During 2023–24, the CMSA will host a seminar on New Technologies in Mathematics, organized by Michael Douglas. The seminar meets on Wednesdays from 2:00–3:00 pm (Eastern Time) in Room G10 at the CMSA, 20 Garden Street, Cambridge, MA 02138; some meetings will be held virtually on Zoom or in hybrid format. To learn how to attend, please fill out this form or contact Michael Douglas (mdouglas@cmsa.fas.harvard.edu). The schedule will be updated as talks are confirmed. Seminar videos are available in the New Technologies in Mathematics playlist on the CMSA YouTube channel.




  • December 06, 2023 02:00 PM
Speaker: Katherine Collins and Albert Jiang, Department of Computer Science and Technology, University of Cambridge
Title: Evaluating Language Models for Mathematics through Interactions
Venue: CMSA Room G10

Abstract: There is much excitement about the opportunity to harness the power of large language models (LLMs) when building problem-solving assistants. However, the standard methodology of evaluating LLMs based on static pairs of inputs and outputs is insufficient to be able to make an informed decision about which LLMs, and under what assistive settings they can be sensibly utilised. Static assessment fails to take into account the essential interactive element in their deployment, and therefore limits how we understand language model capabilities. In this talk, we present our recent work introducing CheckMate, an…

  • November 15, 2023 02:00 PM
Speaker: Abhishek Panigrahi, Dept. of Computer Science, Princeton University
Title: On the Power of Forward pass through Transformer Architectures
Venue: CMSA Room G10

Abstract: Highly trained transformers are capable of interesting computations as they infer for an input. The exact mechanism that these models use during forward passes is an interesting area of study. This talk studies two interesting phenomena. In the first half, we explore how and why pre-trained language models, specifically BERT of moderate sizes, can effectively learn linguistic structures like parse trees during pre-training. Specifically, using synthetic data through PCFGs, we show how moderate-sized transformers can perform forward-backward parsing, also known as the inside-outside algorithm, during inference. We further understand the role of…
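
For readers unfamiliar with the inside-outside algorithm mentioned in this abstract: the inside pass computes the probability that a nonterminal derives a given span of the input. For a PCFG in Chomsky normal form the standard recursion (stated here for reference; the talk's own notation may differ) is

$$\beta(A, i, i) = P(A \to w_i), \qquad \beta(A, i, j) = \sum_{A \to B\,C}\; \sum_{k=i}^{j-1} P(A \to B\,C)\,\beta(B, i, k)\,\beta(C, k+1, j),$$

and the outside pass propagates the complementary context probabilities top-down.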

  • November 08, 2023 02:00 PM
Speaker: Gabriel Poesia, Dept. of Computer Science, Stanford University
Title: Peano: Learning Formal Mathematical Reasoning Without Human Data
Venue: CMSA Room G10

Abstract: Peano is a theorem proving environment in which a computational agent can start tabula rasa in a new domain, learn to solve problems through curiosity-driven exploration, and create its own higher-level actions. Gabriel will describe the system, present case studies on learning to solve simple algebra problems from the Khan Academy platform, and describe work in progress on learning the Natural Number Game, a popular introduction to theorem proving in Lean for mathematicians.

  • October 25, 2023 02:00 PM
Speaker: Sean Welleck, CMU, Language Technologies Institute
Title: Llemma: an open language model for mathematics
Venue: CMSA Room G10

Abstract: We present Llemma: 7 billion and 34 billion parameter language models for mathematics. The Llemma models are initialized with Code Llama weights, then trained on the Proof-Pile II, a 55 billion token dataset of mathematical web data, code, and scientific papers. The resulting models show improved mathematical capabilities, and can be adapted to various tasks. For instance, Llemma outperforms the unreleased Minerva model suite on an equi-parameter basis, and is capable of tool use and formal theorem proving without any further fine-tuning. We openly release all artifacts, including the Llemma models, the Proof-Pile II, and code to…

  • October 18, 2023 02:00 PM
Speaker: Yuanzhi Li, CMU Dept. of Machine Learning and Microsoft Research
Title: Physics of Language Models: Knowledge Storage, Extraction, and Manipulation
Venue: CMSA Room G10

Abstract: Large language models (LLMs) can memorize a massive amount of knowledge during pre-training, but can they effectively use this knowledge at inference time? In this work, we show several striking results about this question. Using a synthetic biography dataset, we first show that even if an LLM achieves zero training loss when pretraining on the biography dataset, it sometimes cannot be finetuned to answer questions as simple as “What is the birthday of XXX” at all. We show that sufficient data augmentation during pre-training, such as rewriting the same biography multiple times or…

  • October 11, 2023 02:00 PM
Speaker: Alex Gu, MIT Dept. of EE&CS
Title: LeanDojo: Theorem Proving with Retrieval-Augmented Language Models
Venue: CMSA Room G10

Abstract: Large language models (LLMs) have shown promise in proving formal theorems using proof assistants such as Lean. However, existing methods are difficult to reproduce or build on, due to private code, data, and large compute requirements. This has created substantial barriers to research on machine learning methods for theorem proving. We introduce LeanDojo: an open-source Lean playground consisting of toolkits, data, models, and benchmarks. LeanDojo extracts data from Lean and enables interaction with the proof environment programmatically. It contains fine-grained annotations of premises in proofs, providing valuable data for premise selection: a key bottleneck in theorem proving. Using…
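
As a rough illustration of what retrieval-based premise selection involves (a minimal sketch under simplifying assumptions, not LeanDojo's actual ReProver code; the hash-based embedding and the lemma names are placeholders):

```python
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Placeholder embedding; a real system would use a trained text encoder."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def retrieve_premises(goal: str, premises: list[str], k: int = 3) -> list[str]:
    """Rank candidate library lemmas by cosine similarity to the current goal."""
    g = embed(goal)
    return sorted(premises, key=lambda p: -float(embed(p) @ g))[:k]

# Hypothetical usage: suggest lemmas for a goal about gcd commutativity.
lemmas = ["Nat.gcd_comm", "Nat.add_comm", "List.length_append", "Nat.gcd_rec"]
print(retrieve_premises("gcd m n = gcd n m", lemmas, k=2))
```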

  • September 27, 2023 02:00 PM
Speaker: François Charton, Meta AI
Title: Transformers for maths, and maths for transformers
Venue: CMSA Room G10

Abstract: Transformers can be trained to solve problems of mathematics. I present two recent applications, in mathematics and physics: predicting integer sequences, and discovering the properties of scattering amplitudes in a close relative of Quantum Chromodynamics. Problems of mathematics can also help understand transformers. Using two examples from linear algebra and integer arithmetic, I show that model predictions can be explained, that trained models do not confabulate, and that carefully choosing the training distributions can help achieve better, and more robust, performance.
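
To give a flavor of how such problems are posed to a transformer, here is a minimal tokenization sketch (an illustrative assumption, not Charton's actual encoding scheme): integers become sign-plus-digit tokens, and a sequence-completion task becomes ordinary next-token prediction.

```python
def encode_int(n: int) -> list[str]:
    """Encode an integer as a sign token followed by its base-10 digit tokens."""
    sign = "+" if n >= 0 else "-"
    return [sign] + list(str(abs(n)))

def encode_sequence(seq: list[int]) -> list[str]:
    """Flatten an integer sequence into a token stream with separators."""
    tokens: list[str] = []
    for n in seq:
        tokens += encode_int(n) + ["<sep>"]
    return tokens

# Hypothetical training pair: given the first terms, predict the next term.
prompt = encode_sequence([1, 1, 2, 3, 5, 8])   # a Fibonacci prefix
target = encode_int(13)
print(prompt, "->", target)
```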

  • September 20, 2023 02:00 PM
Speaker: Ronen Eldan, Microsoft Research
Title: The TinyStories Dataset: How Small Can Language Models Be And Still Speak Coherent English?
Venue: virtual

Abstract: While generative language models exhibit powerful capabilities at large scale, when either the model or the number of training steps is too small, they struggle to produce coherent and fluent text: Existing models whose size is below a few billion parameters often do not generate coherent text beyond a few sentences. Hypothesizing that one of the main reasons for the strong reliance on size is the vast breadth and abundance of patterns in the datasets used to train those models, this motivates the following question: Can we design a dataset that preserves the…

  • May 10, 2023 02:00 PM
Speaker: Dmitry Krotov, IBM Research – Cambridge
Title: Modern Hopfield Networks for Novel Transformer Architectures
Venue: virtual

Abstract: Modern Hopfield Networks or Dense Associative Memories are recurrent neural networks with fixed point attractor states that are described by an energy function. In contrast to conventional Hopfield Networks, which were popular in the 1980s, their modern versions have a very large memory storage capacity, which makes them appealing tools for many problems in machine learning and cognitive and neurosciences. In this talk, I will introduce an intuition and a mathematical formulation of this class of models and will give examples of problems in AI that can be tackled using these new ideas. Particularly, I will…
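
For orientation, one common form of the energy function for Dense Associative Memories (a standard formulation from the literature, not necessarily the one used in the talk), with stored patterns $\xi^{\mu}$ and state $\sigma$, is

$$E(\sigma) = -\sum_{\mu=1}^{K} F\!\left(\xi^{\mu} \cdot \sigma\right),$$

where a rapidly growing $F$ (for example $F(x) = x^{n}$ with $n > 2$, or $F(x) = e^{x}$) gives a storage capacity that grows much faster than linearly in the number of neurons; the exponential case is closely related to the softmax attention used in transformers.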

  • April 26, 2023 02:00 PM
Speaker: Timo Schick, Meta AI
Title: Toolformer: Language Models Can Teach Themselves to Use Tools
Venue: virtual

Abstract: Language models exhibit remarkable abilities to solve new tasks from just a few examples or textual instructions, especially at scale. They also, paradoxically, struggle with basic functionality, such as arithmetic or factual lookup, where much simpler and smaller models excel. In this talk, we show how these limitations can be overcome by letting language models teach themselves to use external tools via simple APIs. We discuss Toolformer, a model trained to independently decide which APIs to call, when to call them, what arguments to pass, and how to best incorporate the results into future token prediction. Through this, it achieves substantially…
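
To make the tool-use idea concrete, here is a toy sketch of executing inline API calls embedded in model output and splicing the results back into the text (an illustration only, not Toolformer's implementation; the bracketed call syntax and the calculator tool are invented for this example):

```python
import re

def calculator(expr: str) -> str:
    """Toy tool: evaluate a basic arithmetic expression."""
    if not re.fullmatch(r"[\d\s+\-*/().]+", expr):
        raise ValueError("unsupported expression")
    return str(eval(expr))  # acceptable here: the character set is restricted

TOOLS = {"Calculator": calculator}
CALL = re.compile(r"\[(\w+)\((.*?)\)\]")  # matches markers like [Calculator(12 * 7)]

def execute_tool_calls(text: str) -> str:
    """Replace each inline [Tool(args)] marker with the tool's output."""
    return CALL.sub(lambda m: TOOLS[m.group(1)](m.group(2)), text)

draft = "The order costs [Calculator(12 * 7)] dollars in total."
print(execute_tool_calls(draft))  # -> The order costs 84 dollars in total.
```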

  • March 08, 2023 02:00 PM
Speaker: Jimmy Ba, University of Toronto
Title: How to steer foundation models?
Venue: CMSA Room G10

Abstract: By conditioning on natural language instructions, foundation models and large language models (LLMs) have displayed impressive capabilities as general-purpose computers. However, task performance depends significantly on the quality of the prompt used to steer the model. Due to the lack of knowledge of how foundation models work, most effective prompts have been handcrafted by humans through a demanding trial-and-error process. To reduce the human effort in this alignment process, I will discuss a few approaches to steer these powerful models to excel in various downstream language and image tasks.

  • December 07, 2022 02:00 PM
Speaker: Cyril Zhang, Microsoft Research
Title: How do Transformers reason? First principles via automata, semigroups, and circuits
Venue: CMSA Room G10

Abstract: The current “Transformer era” of deep learning is marked by the emergence of combinatorial and algorithmic reasoning capabilities in large sequence models, leading to dramatic advances in natural language understanding, program synthesis, and theorem proving. What is the nature of these models’ internal representations (i.e. how do they represent the states and computational steps of the algorithms they execute)? How can we understand and mitigate their weaknesses, given that they resist interpretation? In this work, we present some insights (and many further mysteries) through the lens of automata and their algebraic structure. Specifically, we investigate the apparent…
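
To illustrate the automata-and-semigroups viewpoint (a generic sketch, not the construction from the talk): each input symbol acts on the automaton's state set as a function, processing a word amounts to composing those functions left to right, and because composition is associative the required prefix products can in principle be computed by a shallow parallel circuit rather than a strictly sequential scan.

```python
from functools import reduce

# Parity automaton over {0, 1}: each symbol acts on the state set {0, 1}.
# A transition function is a tuple f with new_state = f[old_state].
TRANSITIONS = {
    "0": (0, 1),  # reading '0' leaves the state unchanged
    "1": (1, 0),  # reading '1' flips the state
}

def compose(f, g):
    """Semigroup product: apply f first, then g."""
    return tuple(g[f[s]] for s in range(len(f)))

def run(word: str, start: int = 0) -> int:
    """Compose the word's transition functions and apply them to the start state."""
    total = reduce(compose, (TRANSITIONS[c] for c in word), (0, 1))  # (0, 1) = identity
    return total[start]

print(run("10110"))  # parity of the number of 1s -> prints 1
```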

  • October 26, 2022 02:00 PM
Speaker: João Araújo, Mathematics Department, Universidade Nova de Lisboa and Michael Kinyon, Department of Mathematics, University of Denver
Title: From Engine to Auto
Venue: CMSA Room G10

Abstract: Bill McCune produced the program EQP that deals with first order logic formulas and in 1996 managed to solve Robbins’ Conjecture. This very powerful tool reduces to triviality any result that can be obtained by encoding the assumptions and the goals. The next step was to turn the program into a genuine assistant for the working mathematician: find ways to help the prover with proofs; reduce the lengths of the automatic proofs to better crack them; solve problems in higher order logic; devise tools that autonomously prove results of…

  • October 19, 2022 02:00 PM
Speaker: Antonia Creswell, DeepMind
Title: Towards Faithful Reasoning Using Language Models
Venue: CMSA Room G10

Abstract: Language models are showing impressive performance on many natural language tasks, including question-answering. However, language models – like most deep learning models – are black boxes. We cannot be sure how they obtain their answers. Do they reason over relevant knowledge to construct an answer or do they rely on prior knowledge – baked into their weights – which may be biased? An alternative approach is to develop models whose output is a human interpretable, faithful reasoning trace leading to an answer. In this talk we will characterise faithful reasoning in terms of logically valid reasoning and demonstrate where current reasoning…

  • October 05, 2022 02:00 PM
Speaker: Guy Gur-Ari, Google Research
Title: Minerva: Solving Quantitative Reasoning Problems with Language Models
Venue: CMSA Room G10

Abstract: Quantitative reasoning tasks, which can involve mathematics, science, and programming, are often challenging for machine learning models in general and for language models in particular. We show that transformer-based language models obtain significantly better performance on math and science questions when trained in an unsupervised way on a large, math-focused dataset. Performance can be further improved using prompting and sampling techniques including chain-of-thought and majority voting. Minerva, a model that combines these techniques, achieves SOTA on several math and science benchmarks. I will describe the model, its capabilities and limitations.
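
As a concrete picture of the majority-voting step mentioned above (a minimal sketch; the sampler below is a random stand-in for drawing chain-of-thought solutions from an actual language model):

```python
import random
from collections import Counter

def sample_answer(question: str) -> str:
    """Stand-in for sampling one chain-of-thought solution from a model and
    extracting its final answer; here we simply simulate a noisy solver."""
    return random.choice(["42", "42", "42", "41", "45"])

def majority_vote(question: str, num_samples: int = 32) -> str:
    """Sample several independent solutions and return the most common answer."""
    answers = Counter(sample_answer(question) for _ in range(num_samples))
    return answers.most_common(1)[0][0]

print(majority_vote("What is 6 * 7?"))  # most likely prints "42"
```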

  • September 28, 2022 02:00 PM
Speaker: Surya Ganguli, Stanford University
Title: Statistical mechanics of neural networks: From the geometry of high dimensional error landscapes to beating power law neural scaling
Venue: CMSA Room G10

Abstract: Statistical mechanics and neural network theory have long enjoyed fruitful interactions. We will review some of our recent work in this area and then focus on two vignettes. First we will analyze the high dimensional geometry of neural network error landscapes that happen to arise as the classical limit of a dissipative many-body quantum optimizer. In particular, we will be able to use the Kac-Rice formula and the replica method to calculate the number, location, energy levels, and Hessian eigenspectra of all critical points of any index. Second we will review…
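
For reference, the Kac-Rice formula mentioned above counts the critical points of a random landscape. In one standard form (quoted here for context, not taken from the talk), the expected number of critical points of an energy $E$ over a region $\Omega$ is

$$\mathbb{E}\big[\#\{w \in \Omega : \nabla E(w) = 0\}\big] \;=\; \int_{\Omega} \mathbb{E}\Big[\big|\det \nabla^{2} E(w)\big| \;\Big|\; \nabla E(w) = 0\Big]\, p_{\nabla E(w)}(0)\, dw,$$

where $p_{\nabla E(w)}(0)$ is the density of the gradient evaluated at zero; restricting the expectation to Hessians of a given signature counts the critical points of a given index.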

  • September 14, 2022 02:00 PM
Speaker: Johan Commelin, Mathematisches Institut, Albert-Ludwigs-Universität Freiburg
Title: Breaking the one-mind-barrier in mathematics using formal verification
Venue: CMSA Room G10

Abstract: In this talk I will argue that formal verification helps break the one-mind-barrier in mathematics. Indeed, formal verification allows a team of mathematicians to collaborate on a project, without one person understanding all parts of the project. At the same time, it also allows a mathematician to rapidly free mental RAM in order to work on a different component of a project. It thus also expands the one-mind-barrier. I will use the Liquid Tensor Experiment as an example, to illustrate the above two points. This project recently finished the formalization of the main theorem of liquid vector spaces, following up on a challenge by…
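
For readers who have not used a proof assistant, here is a minimal Lean 4 example of the kind of machine-checked statement such collaborations are built from (purely illustrative and unrelated to the Liquid Tensor Experiment, which was carried out in Lean 3):

```lean
-- A tiny machine-checked lemma: addition of natural numbers is commutative.
theorem my_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```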

  • March 30, 2022 06:15 PM
Speaker: Yuhuai Wu, Stanford and Google
Title: Memorizing Transformers
Venue: Virtual

Abstract: Language models typically need to be trained or fine-tuned in order to acquire new knowledge, which involves updating their weights. We instead envision language models that can simply read and memorize new data at inference time, thus acquiring new knowledge immediately. In this talk, I will discuss how we extend language models with the ability to memorize the internal representations of past inputs. We demonstrate that an approximate NN lookup into a non-differentiable memory of recent (key, value) pairs improves language modeling across various benchmarks and tasks, including generic webtext (C4), math papers (arXiv), books (PG-19), code (Github), as well as formal theorems (Isabelle). We show that the performance steadily improves when we increase the size of memory…
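
As a toy sketch of the memory mechanism described above (not the paper's implementation; the dimensions, the exact-search stand-in for approximate kNN, and the omission of the gating with local attention are all simplifications):

```python
import numpy as np

def knn_memory_attention(query, mem_keys, mem_values, k=4):
    """Retrieve the k most similar cached keys and attend over only those entries."""
    scores = mem_keys @ query                    # similarity to every stored key
    top = np.argsort(scores)[-k:]                # indices of the k best matches
    weights = np.exp(scores[top] - scores[top].max())
    weights /= weights.sum()                     # softmax over the retrieved entries
    return weights @ mem_values[top]             # weighted combination of values

rng = np.random.default_rng(0)
d, n_mem = 16, 1000
memory_keys = rng.standard_normal((n_mem, d))    # (key, value) pairs cached from
memory_values = rng.standard_normal((n_mem, d))  # earlier parts of the document
query = rng.standard_normal(d)                   # current token's query vector

retrieved = knn_memory_attention(query, memory_keys, memory_values)
print(retrieved.shape)  # (16,); this would then be combined with local attention
```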

  • March 23, 2022 06:14 PM
Speaker: Stanislas Polu, OpenAI
Title: Formal Mathematics Statement Curriculum Learning
Venue: Virtual

Abstract: We explore the use of expert iteration in the context of language modeling applied to formal mathematics. We show that at the same compute budget, expert iteration, by which we mean proof search interleaved with learning, dramatically outperforms proof search only. We also observe that when applied to a collection of formal statements of sufficiently varied difficulty, expert iteration is capable of finding and solving a curriculum of increasingly difficult problems, without the need for associated ground-truth proofs. Finally, by applying this expert iteration to a manually curated set of problem statements, we achieve state-of-the-art on the miniF2F benchmark, automatically solving multiple challenging problems drawn from high school olympiads.
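
In outline, the expert-iteration loop alternates search with learning on whatever the search finds. The toy simulation below (an illustration of the feedback loop only, with a made-up scalar "skill" standing in for an actual model and prover) shows how solving the easy statements first can unlock progressively harder ones:

```python
import random

def search_for_proof(skill: float, difficulty: float) -> bool:
    """Toy stand-in for proof search: success is likelier on easier statements."""
    return random.random() < max(0.0, skill - difficulty)

def expert_iteration(difficulties, skill=0.3, rounds=5):
    """Alternate search with 'learning': each solved statement raises the skill."""
    for r in range(rounds):
        solved = [d for d in difficulties if search_for_proof(skill, d)]
        skill += 0.02 * len(solved)   # stand-in for fine-tuning on the proofs found
        print(f"round {r}: solved {len(solved)}/{len(difficulties)}, skill={skill:.2f}")
    return skill

expert_iteration(difficulties=[i / 20 for i in range(20)])
```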

  • March 09, 2022 06:13 PM
Speaker: Iddo Drori, MIT EE&CS and Columbia School of Engineering
Title: Machine Learning 30 STEM Courses in 12 Departments
Venue: Virtual

Abstract: We automatically solve, explain, and generate university-level course problems from thirty STEM courses (at MIT, Harvard, and Columbia) for the first time. We curate a new dataset of course questions and answers across a dozen departments: Aeronautics and Astronautics, Chemical Engineering, Chemistry, Computer Science, Economics, Electrical Engineering, Materials Science, Mathematics, Mechanical Engineering, Nuclear Science, Physics, and Statistics. We generate new questions and use them in a Columbia University course, and perform A/B tests demonstrating that these machine-generated questions are indistinguishable from human-written questions and that machine-generated explanations are as useful as human-written explanations, again for the first time. Our approach consists of five steps: (i) Given course questions, turn them into programming tasks; (ii) Automatically generate…
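
To illustrate step (i), turning a course question into a programming task means producing a short program whose execution yields the answer. A hand-written example of such a target program (showing only the format; no model is involved, and the specific question is invented):

```python
from sympy import symbols, integrate

# Course-style question: "Compute the definite integral of x**2 from 0 to 3."
x = symbols("x")
answer = integrate(x**2, (x, 0, 3))
print(answer)  # 9
```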