Physics of Language Models: Knowledge Storage, Extraction, and Manipulation

10/18/2023 2:00 pm - 3:00 pm
CMSA Room G10
Address: CMSA, 20 Garden Street, Cambridge, MA 02138 USA

New Technologies in Mathematics Seminar

Speaker: Yuanzhi Li, CMU Dept. of Machine Learning and Microsoft Research

Title: Physics of Language Models: Knowledge Storage, Extraction, and Manipulation

Abstract: Large language models (LLMs) can memorize a massive amount of knowledge during pre-training, but can they effectively use this knowledge at inference time? In this work, we show several striking results on this question. Using a synthetic biography dataset, we first show that even if an LLM achieves zero training loss when pre-training on the biography dataset, it sometimes cannot be fine-tuned to answer questions as simple as “What is the birthday of XXX” at all. We show that sufficient data augmentation during pre-training, such as rewriting the same biography multiple times or simply using the person’s full name in every sentence, can mitigate this issue. Using linear probing, we find that such augmentation forces the model to store knowledge about a person in the token embeddings of their name rather than in other locations.
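The augmentation described above can be illustrated with a minimal sketch. Everything here is hypothetical (the names, the sentence templates, and the field choices are not from the talk); the point is only that the same fact is rewritten several ways, with the person's full name appearing in every sentence instead of a pronoun.

```python
import random

# Illustrative sentence templates; each mentions the full name explicitly.
TEMPLATES = [
    "{name} was born on {birthday}.",
    "The birthday of {name} is {birthday}.",
    "{name} celebrates a birthday on {birthday}.",
]

def augment_biography(name: str, birthday: str, seed: int = 0) -> list[str]:
    """Return several distinct rewrites of the same biographical fact."""
    rng = random.Random(seed)
    # Shuffle template order so rewrites vary across people/seeds.
    templates = rng.sample(TEMPLATES, k=len(TEMPLATES))
    return [t.format(name=name, birthday=birthday) for t in templates]

bios = augment_biography("Anya Briar Forger", "October 2, 1996")
for line in bios:
    print(line)
```

Each rewrite carries the identical fact, so the model cannot get zero loss by memorizing one surface form; the probing result suggests this is what pushes the knowledge into the name's token embeddings.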

We then show that LLMs are very bad at manipulating knowledge they learn during pre-training unless a chain of thought (CoT) is used at inference time. We pre-trained an LLM on the synthetic biography dataset so that it could answer “What is the birthday of XXX” with 100% accuracy. Even so, it could not be further fine-tuned to answer questions like “Is the birthday of XXX even or odd?” directly. Even including CoT training data only helps the model answer such questions in a CoT manner, not directly.
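The distinction between the two question formats can be sketched as follows. This is an illustrative reconstruction, not the authors' training data: the direct format demands the parity answer alone, while the CoT format first retrieves the birthday and then reasons about its parity.

```python
def direct_example(name: str, birthday_day: int) -> dict:
    """Direct format: the target is the bare answer, no intermediate step."""
    parity = "even" if birthday_day % 2 == 0 else "odd"
    return {
        "question": f"Is the birthday of {name} even or odd?",
        "answer": parity,
    }

def cot_example(name: str, birthday_day: int) -> dict:
    """CoT format: the target retrieves the fact, then manipulates it."""
    parity = "even" if birthday_day % 2 == 0 else "odd"
    return {
        "question": f"Is the birthday of {name} even or odd?",
        "answer": (
            f"The birthday of {name} is on day {birthday_day}. "
            f"{birthday_day} is {parity}. So the answer is {parity}."
        ),
    }

print(direct_example("Anya Briar Forger", 2)["answer"])
print(cot_example("Anya Briar Forger", 2)["answer"])
```

The result in the abstract is that fine-tuning on the first format fails even when birthday retrieval is perfect, and training on the second format only teaches the model to answer in the second style.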

We will also discuss preliminary progress on understanding the scaling law for how large a language model needs to be to store X pieces of knowledge and extract them efficiently. For example, is a 1B-parameter language model enough to store all the knowledge of a middle school student?
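The flavor of such a scaling estimate can be shown with a back-of-envelope sketch. Every number here is an assumption for illustration only (the per-parameter capacity, the bits per fact, and the fact count are not results from the talk); the sketch only shows how the question "how many facts fit in N parameters?" would be computed once a capacity constant is measured.

```python
# Hypothetical capacity constant: bits of knowledge stored per parameter.
BITS_PER_PARAM = 2.0
# Hypothetical entropy of one (person, attribute, value) knowledge tuple.
BITS_PER_FACT = 24.0

def max_facts(n_params: int) -> float:
    """Facts a model of n_params parameters could store, under the assumptions above."""
    return n_params * BITS_PER_PARAM / BITS_PER_FACT

print(f"1B-parameter model: ~{max_facts(1_000_000_000):.2e} facts")
```

Answering the middle-school question then reduces to measuring the capacity constant empirically and estimating the fact count and per-fact entropy of a school curriculum.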