Analyzing Large Language Model Behavior via Embedding Analysis

Open Access Conference Proceedings Article
Authors: Sourya Dey, Michael Robinson, Shauna Sweet, Andrew Lauziere, Jonathan Daugherty, Caitlin Burgess

Abstract: The use of large language models (LLMs) as generative artificial intelligence tools is increasingly widespread, yet there is limited understanding of how prompts, in whole or in part, influence their behavior, capabilities, and limitations. In this paper, the authors conduct a mathematical and topological analysis of token embeddings – the first step in the computational workflow of LLMs. This work shows that the subspace where token embeddings lie is a stratified manifold with varying local dimension, and in those cases where semantically related tokens are co-located on a submanifold, there are non-trivial implications for model behavior. These topological and geometric findings help to explain performance differences between LLMs, such as why the Llemma model is more likely to overfit than the GPT-2 model, yet the latter does worse at mathematical queries than the former. To the best of the authors' knowledge, this paper is among the first to conduct such a topological characterization of the token embedding space and to analyze LLM behavior from first principles.
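The abstract's central notion, a stratified point cloud whose local intrinsic dimension varies from point to point, can be illustrated with a minimal sketch. The code below is not the authors' actual pipeline: it builds a synthetic cloud with two strata of different dimension and estimates local dimension via PCA (through the SVD) on each point's nearest-neighbor neighborhood, a standard technique assumed here for illustration.

```python
import numpy as np

def local_dimension(points, idx, k=20, var_threshold=0.99):
    """Estimate the local intrinsic dimension at points[idx]: the number
    of principal components needed to explain var_threshold of the
    variance among its k nearest neighbors."""
    dists = np.linalg.norm(points - points[idx], axis=1)
    nbrs = points[np.argsort(dists)[1:k + 1]]   # skip the point itself
    centered = nbrs - nbrs.mean(axis=0)
    s = np.linalg.svd(centered, compute_uv=False)
    var_ratio = s**2 / np.sum(s**2)
    return int(np.searchsorted(np.cumsum(var_ratio), var_threshold) + 1)

rng = np.random.default_rng(0)
# Synthetic "stratified" cloud in R^10: a 2-D sheet and a 5-D patch,
# offset along the last coordinate so the two strata stay separated.
sheet = np.zeros((300, 10)); sheet[:, :2] = rng.normal(size=(300, 2))
patch = np.zeros((300, 10)); patch[:, :5] = rng.normal(size=(300, 5))
patch[:, 9] += 50.0
cloud = np.vstack([sheet, patch])

d_sheet = local_dimension(cloud, 0)    # should recover 2
d_patch = local_dimension(cloud, 300)  # should recover 5
```

Real token embeddings would replace the synthetic `cloud` (e.g. the rows of a model's embedding matrix), and the spread of estimated dimensions across tokens is what motivates the stratified-manifold view.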

Keywords: Large language models, Generative artificial intelligence, Machine learning, Emerging technologies

DOI: 10.54941/ahfe1005724
