The Magnitude of Categories of Texts Enriched by Language Models

Date:

January 22, 2025

Type:

Preprint

Publication:

ArXiv

Author(s):

Tai-Danae Bradley, Juan Pablo Vigneaux

Abstract

The purpose of this article is twofold. First, we use the next-token probabilities given by a language model to explicitly define a [0,1]-enrichment of a category of texts in natural language, in the sense of Bradley, Terilla, and Vlassopoulos. We consider explicitly the terminating conditions for text generation and determine when the enrichment itself can be interpreted as a probability over texts. Second, we compute the Möbius function and the magnitude of an associated generalized metric space M of texts using a combinatorial version of these quantities recently introduced by Vigneaux. The magnitude function f(t) of M is a sum over texts x (prompts) of the Tsallis t-entropies of the next-token probability distributions p(−|x) plus the cardinality of the model's possible outputs. The derivative of f at t=1 recovers a sum of Shannon entropies, which justifies seeing magnitude as a partition function. Following Leinster and Shulman, we also express the magnitude function of M as an Euler characteristic of magnitude homology and provide an explicit description of the zeroth and first magnitude homology groups.
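The Tsallis-to-Shannon relationship mentioned in the abstract can be checked numerically. The sketch below (not taken from the paper; the toy distribution is an invented stand-in for a next-token distribution p(−|x)) uses the standard Tsallis t-entropy S_t(p) = (1 − Σᵢ pᵢᵗ)/(t − 1) and verifies that its t→1 limit agrees with the Shannon entropy in nats:

```python
import math

def tsallis_entropy(p, t):
    """Tsallis t-entropy: S_t(p) = (1 - sum_i p_i**t) / (t - 1), for t != 1."""
    return (1.0 - sum(pi ** t for pi in p)) / (t - 1.0)

def shannon_entropy(p):
    """Shannon entropy in nats: -sum_i p_i * ln(p_i)."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

# A toy stand-in for a next-token distribution p(-|x).
p = [0.5, 0.3, 0.2]

# Near t = 1 the Tsallis entropy approaches the Shannon entropy.
approx = tsallis_entropy(p, 1.0 + 1e-6)
exact = shannon_entropy(p)
print(abs(approx - exact) < 1e-5)
```

This is the standard limit S_t(p) → H(p) as t → 1; the paper's magnitude function aggregates such entropies over all prompts x.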
