BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//CMSA - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:CMSA
X-ORIGINAL-URL:https://cmsa.fas.harvard.edu
X-WR-CALDESC:Events for CMSA
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20210314T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20211107T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20220313T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20221106T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20230312T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20231105T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20240310T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20241103T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20250309T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20251102T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241002T140000
DTEND;TZID=America/New_York:20241002T150000
DTSTAMP:20260508T041654Z
CREATED:20240907T180645Z
LAST-MODIFIED:20241002T195652Z
UID:10003453-1727877600-1727881200@cmsa.fas.harvard.edu
SUMMARY:Hierarchical data structures through the lenses of diffusion models
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: Antonio Sclocchi\, EPFL \nTitle: Hierarchical data structures through the lenses of diffusion models \nAbstract: The success of deep learning with high-dimensional data relies on the fact that natural data are highly structured. A key aspect of this structure is hierarchical compositionality\, yet quantifying it remains a challenge. \nIn this talk\, we explore how diffusion models can serve as a tool to probe the hierarchical structure of data. We consider a context-free generative model of hierarchical data and show the distinct behaviors of high- and low-level features during a noising-denoising process. Specifically\, we find that high-level features undergo a sharp transition in reconstruction probability at a specific noise level\, while low-level features recombine into new data from different classes. This behavior of latent features leads to correlated changes in real-space variables\, resulting in a diverging correlation length at the transition. \nWe validate these predictions in experiments with real data\, using state-of-the-art diffusion models for both images and texts. Remarkably\, both modalities exhibit a growing correlation length in changing features at the transition of the noising-denoising process. \nOverall\, these results highlight the potential of hierarchical models in capturing non-trivial data structures and offer new theoretical insights for understanding generative AI.
URL:https://cmsa.fas.harvard.edu/event/newtech_10224/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-NTM-Seminar-10.2.24.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240925T140000
DTEND;TZID=America/New_York:20240925T150000
DTSTAMP:20260508T041655Z
CREATED:20240907T180716Z
LAST-MODIFIED:20241002T144226Z
UID:10003454-1727272800-1727276400@cmsa.fas.harvard.edu
SUMMARY:Infinite Limits and Scaling Laws for Deep Neural Networks
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: Blake Bordelon \nTitle: Infinite Limits and Scaling Laws for Deep Neural Networks \nAbstract: Scaling up the size and training horizon of deep learning models has enabled breakthroughs in computer vision and natural language processing. Empirical evidence suggests that these neural network models are described by regular scaling laws where performance of finite parameter models improves as model size increases\, eventually approaching a limit described by the performance of an infinite parameter model. In this talk\, we will first examine certain infinite parameter limits of deep neural networks which preserve representation learning and then describe how quickly finite models converge to these limits. Using dynamical mean field theory methods\, we provide an asymptotic description of the learning dynamics of randomly initialized infinite width and depth networks. Next\, we will empirically investigate how close the training dynamics of finite networks are to these idealized limits. Lastly\, we will provide a theoretical model of neural scaling laws which describes how generalization depends on three computational resources: training time\, model size and data quantity. This theory allows analysis of compute optimal scaling strategies and predicts how model size and training time should be scaled together in terms of spectral properties of the limiting kernel. The theory also predicts how representation learning can improve neural scaling laws in certain regimes. For very hard tasks\, the theory predicts that representation learning can approximately double the training-time exponent compared to the static kernel limit.
URL:https://cmsa.fas.harvard.edu/event/newtech_92524/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-NTM-Seminar-9.25.24.docx-1.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240320T140000
DTEND;TZID=America/New_York:20240320T150000
DTSTAMP:20260508T041655Z
CREATED:20240130T215041Z
LAST-MODIFIED:20240321T140550Z
UID:10001519-1710943200-1710946800@cmsa.fas.harvard.edu
SUMMARY:Solving olympiad geometry without human demonstrations
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: Trieu H. Trinh\, Google Deepmind and NYU Dept. of Computer Science \nTitle: Solving olympiad geometry without human demonstrations \nAbstract: Proving mathematical theorems at the olympiad level represents a notable milestone in human-level automated reasoning\, owing to their reputed difficulty among the world’s best talents in pre-university mathematics. Current machine-learning approaches\, however\, are not applicable to most mathematical domains owing to the high cost of translating human proofs into machine-verifiable format. The problem is even worse for geometry because of its unique translation challenges\, resulting in severe scarcity of training data. We propose AlphaGeometry\, a theorem prover for Euclidean plane geometry that sidesteps the need for human demonstrations by synthesizing millions of theorems and proofs across different levels of complexity. AlphaGeometry is a neuro-symbolic system that uses a neural language model\, trained from scratch on our large-scale synthetic data\, to guide a symbolic deduction engine through infinite branching points in challenging problems. On a test set of 30 latest olympiad-level problems\, AlphaGeometry solves 25\, outperforming the previous best method that only solves ten problems and approaching the performance of an average International Mathematical Olympiad (IMO) gold medallist. Notably\, AlphaGeometry produces human-readable proofs\, solves all geometry problems in the IMO 2000 and 2015 under human expert evaluation and discovers a generalized version of a translated IMO theorem in 2004.
URL:https://cmsa.fas.harvard.edu/event/nt-32024/
LOCATION:Virtual
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-NTM-Seminar-03.20.2024.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240306T140000
DTEND;TZID=America/New_York:20240306T150000
DTSTAMP:20260508T041655Z
CREATED:20240108T153449Z
LAST-MODIFIED:20240306T221235Z
UID:10001129-1709733600-1709737200@cmsa.fas.harvard.edu
SUMMARY:LILO: Learning Interpretable Libraries by Compressing and Documenting Code
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: Gabe Grand\, MIT CSAIL and Dept. of EE&CS \nTitle: LILO: Learning Interpretable Libraries by Compressing and Documenting Code \nAbstract: While large language models (LLMs) now excel at code generation\, a key aspect of software development is the art of refactoring: consolidating code into libraries of reusable and readable programs. In this paper\, we introduce LILO\, a neurosymbolic framework that iteratively synthesizes\, compresses\, and documents code to build libraries tailored to particular problem domains. LILO combines LLM-guided program synthesis with recent algorithmic advances in automated refactoring from Stitch: a symbolic compression system that efficiently identifies optimal lambda abstractions across large code corpora. To make these abstractions interpretable\, we introduce an auto-documentation (AutoDoc) procedure that infers natural language names and docstrings based on contextual examples of usage. In addition to improving human readability\, we find that AutoDoc boosts performance by helping LILO’s synthesizer to interpret and deploy learned abstractions. We evaluate LILO on three inductive program synthesis benchmarks for string editing\, scene reasoning\, and graphics composition. Compared to existing neural and symbolic methods – including the state-of-the-art library learning algorithm DreamCoder – LILO solves more complex tasks and learns richer libraries that are grounded in linguistic knowledge.
URL:https://cmsa.fas.harvard.edu/event/nt-3624/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-NTM-Seminar-03.06.2024.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240221T140000
DTEND;TZID=America/New_York:20240221T150000
DTSTAMP:20260508T041655Z
CREATED:20240105T034012Z
LAST-MODIFIED:20240223T152643Z
UID:10001113-1708524000-1708527600@cmsa.fas.harvard.edu
SUMMARY:Computers and mathematics in partial differential equations: New developments and challenges
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: Javier Gomez Serrano\, Brown University \nTitle: Computers and mathematics in partial differential equations: new developments and challenges \nAbstract: In this talk I will address the interaction between traditional and more modern mathematics and how computers have helped over the last decade by providing rigorous (computer-assisted) proofs in the context of partial differential equations. I will also describe new exciting future directions in the field. No background is assumed.
URL:https://cmsa.fas.harvard.edu/event/nt-22124/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-NTM-Seminar-02.21.2024.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240214T140000
DTEND;TZID=America/New_York:20240214T150000
DTSTAMP:20260508T041655Z
CREATED:20240102T164110Z
LAST-MODIFIED:20240130T194619Z
UID:10000151-1707919200-1707922800@cmsa.fas.harvard.edu
SUMMARY:What Algorithms can Transformers Learn? A Study in Length Generalization
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: Preetum Nakkiran\, Apple \nTitle: What Algorithms can Transformers Learn? A Study in Length Generalization \nAbstract: Large language models exhibit many surprising “out-of-distribution” generalization abilities\, yet also struggle to solve certain simple tasks like decimal addition. To clarify the scope of Transformers’ out-of-distribution generalization\, we isolate this behavior in a specific controlled setting: length generalization on algorithmic tasks. E.g.\, can a model trained on 10-digit addition generalize to 50-digit addition? For which tasks do we expect this to work? \nOur key tool is the recently introduced RASP language (Weiss et al. 2021)\, which is a programming language tailor-made for the Transformer’s computational model. We conjecture\, informally\, that: Transformers tend to length-generalize on a task if there exists a short RASP program that solves the task for all input lengths. This simple conjecture remarkably captures most known instances of length generalization on algorithmic tasks\, and can also inform the design of effective scratchpads. Finally\, on the theoretical side\, we give a simple separating example between our conjecture and the “min-degree-interpolator” model of learning from Abbe et al. (2023). \nJoint work with Hattie Zhou\, Arwen Bradley\, Etai Littwin\, Noam Razin\, Omid Saremi\, Josh Susskind\, and Samy Bengio. To appear in ICLR 2024.
URL:https://cmsa.fas.harvard.edu/event/nt21424/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-NTM-Seminar-02.14.2024.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240207T130000
DTEND;TZID=America/New_York:20240207T140000
DTSTAMP:20260508T041655Z
CREATED:20240102T163838Z
LAST-MODIFIED:20240207T220617Z
UID:10000149-1707310800-1707314400@cmsa.fas.harvard.edu
SUMMARY:Large language models\, mathematical discovery\, and search in the space of strategies: an anecdote
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: Jordan Ellenberg (UW Madison) \nTitle: Large language models\, mathematical discovery\, and search in the space of strategies: an anecdote \nAbstract: I spent a portion of 2023 working with a team at DeepMind on the “cap set problem” – how large can a subset of (Z/3Z)^n be which contains no three terms which sum to zero? (I will explain\, for those not familiar with this problem\, something about the role it plays in combinatorics\, its history\, and why number theorists care about it a lot.) By now\, there are many examples of machine learning mechanisms being used to help generate interesting mathematical knowledge\, and especially interesting examples. This project used a novel protocol: instead of searching directly for large cap sets\, we used LLMs trained on code to search the space of short programs for those which\, when executed\, output large cap sets. One advantage is that a program is much more human-readable than a large collection of vectors over Z/3Z\, bringing us closer to the not-very-well-defined-but-important goal of “interpretable machine learning.” I’ll talk about what succeeded in this project (more than I expected!)\, what didn’t\, and what role I can imagine this approach to the math-ML interface playing in near-future mathematical practice. \nThe paper: https://www.nature.com/articles/s41586-023-06924-6
URL:https://cmsa.fas.harvard.edu/event/nt2724/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-NTM-Seminar-02.07.24.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240124T140000
DTEND;TZID=America/New_York:20240124T150000
DTSTAMP:20260508T041655Z
CREATED:20240102T163450Z
LAST-MODIFIED:20240125T165049Z
UID:10000148-1706104800-1706108400@cmsa.fas.harvard.edu
SUMMARY:Approaches to the formalization of differential geometry
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: Heather Macbeth\, Fordham University \nTitle: Approaches to the formalization of differential geometry \nAbstract: In the last five years\, there has been early work on the computer formalization of differential geometry. I will survey the projects I am aware of. I will also describe two projects of my own\, as case studies for typical challenges. The first (joint with Floris van Doorn) is an exercise in developing suitable abstractions\, the second (joint with Mario Carneiro) is an exercise in developing suitable automation.
URL:https://cmsa.fas.harvard.edu/event/nt-12424/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-NTM-Seminar-01.24.2024.docx-1.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20231115T140000
DTEND;TZID=America/New_York:20231115T150000
DTSTAMP:20260508T041655Z
CREATED:20240222T094758Z
LAST-MODIFIED:20240222T095355Z
UID:10002797-1700056800-1700060400@cmsa.fas.harvard.edu
SUMMARY:On the Power of Forward Passes through Transformer Architectures
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: Abhishek Panigrahi\, Dept. of Computer Science\, Princeton University \nTitle: On the Power of Forward Passes through Transformer Architectures \nAbstract: Highly trained transformers are capable of interesting computations as they run inference on an input. The exact mechanism that these models use during forward passes is an interesting area of study. This talk studies two interesting phenomena. \nIn the first half\, we explore how and why pre-trained language models\, specifically BERT models of moderate size\, can effectively learn linguistic structures like parse trees during pre-training. Specifically\, using synthetic data generated from PCFGs\, we show how moderate-sized transformers can perform forward-backward parsing\, also known as the inside-outside algorithm\, during inference. We further examine the role of the pre-training loss in enabling the model to learn to parse during pre-training. \nIn the second half\, we consider in-context learning of large language models\, where they learn to reason on the fly. An ongoing hypothesis is that transformers simulate gradient descent at inference to perform in-context learning. We propose the Transformer in Transformer (TinT) framework\, which creates explicit transformer architectures that can simulate and fine-tune a small pre-trained transformer model during inference. E.g.\, a 1.3B-parameter TinT model can simulate and fine-tune a 125-million-parameter model in a single forward pass. This framework suggests that large transformers might execute intricate sub-routines during inference\, and provides insights for enhancing their capabilities through intelligent design considerations.
URL:https://cmsa.fas.harvard.edu/event/nt-111523/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/NTM-11.15.2023.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20231108T140000
DTEND;TZID=America/New_York:20231108T150000
DTSTAMP:20260508T041655Z
CREATED:20240222T095919Z
LAST-MODIFIED:20240222T095919Z
UID:10002798-1699452000-1699455600@cmsa.fas.harvard.edu
SUMMARY:Peano: Learning Formal Mathematical Reasoning Without Human Data
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: Gabriel Poesia\, Dept. of Computer Science\, Stanford University \nTitle: Peano: Learning Formal Mathematical Reasoning Without Human Data \nAbstract: Peano is a theorem proving environment in which a computational agent can start tabula rasa in a new domain\, learn to solve problems through curiosity-driven exploration\, and create its own higher level actions. Gabriel will describe the system\, present case studies on learning to solve simple algebra problems from the Khan Academy platform\, and describe work in progress on learning the Natural Number Game\, a popular introduction to theorem proving in Lean for mathematicians.
URL:https://cmsa.fas.harvard.edu/event/nt-11823/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/NTM-11.08.2023.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20231025T140000
DTEND;TZID=America/New_York:20231025T150000
DTSTAMP:20260508T041655Z
CREATED:20240223T105453Z
LAST-MODIFIED:20240223T105453Z
UID:10002853-1698242400-1698246000@cmsa.fas.harvard.edu
SUMMARY:Llemma: an open language model for mathematics
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: Sean Welleck\, CMU\, Language Technologies Institute \nTitle: Llemma: an open language model for mathematics \nAbstract: We present Llemma: 7 billion and 34 billion parameter language models for mathematics. The Llemma models are initialized with Code Llama weights\, then trained on the Proof-Pile II\, a 55 billion token dataset of mathematical web data\, code\, and scientific papers. The resulting models show improved mathematical capabilities\, and can be adapted to various tasks. For instance\, Llemma outperforms the unreleased Minerva model suite on an equi-parameter basis\, and is capable of tool use and formal theorem proving without any further fine-tuning. We openly release all artifacts\, including the Llemma models\, the Proof-Pile II\, and code to replicate our experiments. We hope that Llemma serves as a platform for new research and tools at the intersection of generative models and mathematics.
URL:https://cmsa.fas.harvard.edu/event/nt-102523/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/NTM-10.25.2023.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20231018T140000
DTEND;TZID=America/New_York:20231018T150000
DTSTAMP:20260508T041655Z
CREATED:20240223T114049Z
LAST-MODIFIED:20240223T114049Z
UID:10002867-1697637600-1697641200@cmsa.fas.harvard.edu
SUMMARY:Physics of Language Models: Knowledge Storage\, Extraction\, and Manipulation
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: Yuanzhi Li\, CMU Dept. of Machine Learning and Microsoft Research \nTitle: Physics of Language Models: Knowledge Storage\, Extraction\, and Manipulation \nAbstract: Large language models (LLMs) can memorize a massive amount of knowledge during pre-training\, but can they effectively use this knowledge at inference time? In this work\, we show several striking results about this question. Using a synthetic biography dataset\, we first show that even if an LLM achieves zero training loss when pretraining on the biography dataset\, it sometimes cannot be fine-tuned to answer questions as simple as “What is the birthday of XXX” at all. We show that sufficient data augmentation during pre-training\, such as rewriting the same biography multiple times or simply using the person’s full name in every sentence\, can mitigate this issue. Using linear probing\, we find that such augmentation forces the model to store knowledge about a person in the token embeddings of their name rather than other locations. \nWe then show that LLMs are very bad at manipulating knowledge they learn during pre-training unless a chain of thought is used at inference time. We pretrained an LLM on the synthetic biography dataset\, so that it could answer “What is the birthday of XXX” with 100% accuracy. Even so\, it could not be further fine-tuned to answer questions like “Is the birthday of XXX even or odd?” directly. Even using Chain of Thought training data only helps the model answer such questions in a CoT manner\, not directly. \nWe will also discuss preliminary progress on understanding the scaling law of how large a language model needs to be to store X pieces of knowledge and extract them efficiently. For example\, is a 1B parameter language model enough to store all the knowledge of a middle school student?
URL:https://cmsa.fas.harvard.edu/event/nt-101823/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/NTM-10.18.2023.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20231011T140000
DTEND;TZID=America/New_York:20231011T150000
DTSTAMP:20260508T041655Z
CREATED:20240223T114336Z
LAST-MODIFIED:20240223T114336Z
UID:10002868-1697032800-1697036400@cmsa.fas.harvard.edu
SUMMARY:LeanDojo: Theorem Proving with Retrieval-Augmented Language Models
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: Alex Gu\, MIT Dept. of EE&CS \nTitle: LeanDojo: Theorem Proving with Retrieval-Augmented Language Models \nAbstract: Large language models (LLMs) have shown promise in proving formal theorems using proof assistants such as Lean. However\, existing methods are difficult to reproduce or build on\, due to private code\, data\, and large compute requirements. This has created substantial barriers to research on machine learning methods for theorem proving. We introduce LeanDojo: an open-source Lean playground consisting of toolkits\, data\, models\, and benchmarks. LeanDojo extracts data from Lean and enables interaction with the proof environment programmatically. It contains fine-grained annotations of premises in proofs\, providing valuable data for premise selection: a key bottleneck in theorem proving. Using this data\, we develop ReProver (Retrieval-Augmented Prover): the first LLM-based prover that is augmented with retrieval for selecting premises from a vast math library. It is inexpensive and needs only one GPU week of training. Our retriever leverages LeanDojo’s program analysis capability to identify accessible premises and hard negative examples\, which makes retrieval much more effective. Furthermore\, we construct a new benchmark consisting of 96\,962 theorems and proofs extracted from Lean’s math library. It features a challenging data split requiring the prover to generalize to theorems relying on novel premises that are never used in training. We use this benchmark for training and evaluation\, and experimental results demonstrate the effectiveness of ReProver over non-retrieval baselines and GPT-4. We thus provide the first set of open-source LLM-based theorem provers without any proprietary datasets and release it under a permissive MIT license to facilitate further research.
URL:https://cmsa.fas.harvard.edu/event/nt-101123-2/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-NTM-Seminar-10.11.2023.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20230927T140000
DTEND;TZID=America/New_York:20230927T150000
DTSTAMP:20260508T041655Z
CREATED:20240227T082824Z
LAST-MODIFIED:20240227T082824Z
UID:10002872-1695823200-1695826800@cmsa.fas.harvard.edu
SUMMARY:Transformers for maths\, and maths for transformers
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: François Charton\, Meta AI \nTitle: Transformers for maths\, and maths for transformers \nAbstract: Transformers can be trained to solve problems of mathematics. I present two recent applications\, in mathematics and physics: predicting integer sequences\, and discovering the properties of scattering amplitudes in a close relative of Quantum Chromodynamics. \nProblems of mathematics can also help understand transformers. Using two examples from linear algebra and integer arithmetic\, I show that model predictions can be explained\, that trained models do not confabulate\, and that carefully choosing the training distributions can help achieve better\, and more robust\, performance.
URL:https://cmsa.fas.harvard.edu/event/nt-92723/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-NTM-Seminar-09.27.2023.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20230920T140000
DTEND;TZID=America/New_York:20230920T150000
DTSTAMP:20260508T041655Z
CREATED:20240227T083355Z
LAST-MODIFIED:20240227T083355Z
UID:10002873-1695218400-1695222000@cmsa.fas.harvard.edu
SUMMARY:The TinyStories Dataset: How Small Can Language Models Be And Still Speak Coherent English?
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: Ronen Eldan\, Microsoft Research \nTitle: The TinyStories Dataset: How Small Can Language Models Be And Still Speak Coherent English? \nAbstract: While generative language models exhibit powerful capabilities at large scale\, when either the model or the number of training steps is too small\, they struggle to produce coherent and fluent text: Existing models whose size is below a few billion parameters often do not generate coherent text beyond a few sentences. Hypothesizing that one of the main reasons for the strong reliance on size is the vast breadth and abundance of patterns in the datasets used to train those models\, this motivates the following question: Can we design a dataset that preserves the essential elements of natural language\, such as grammar\, vocabulary\, facts\, and reasoning\, but that is much smaller and more refined in terms of its breadth and diversity? \nIn this talk\, we introduce TinyStories\, a synthetic dataset of short stories that only contain words that 3- to 4-year-olds typically understand\, generated by GPT-3.5/4. We show that TinyStories can be used to train and analyze language models that are much smaller than the state-of-the-art models (below 10 million parameters)\, or have much simpler architectures (with only one transformer block)\, yet still produce fluent and consistent stories with several paragraphs that are diverse and have almost perfect grammar\, and demonstrate certain reasoning capabilities. We also show that the trained models are substantially more interpretable than larger ones\, as we can visualize and analyze the attention and activation patterns of the models\, and show how they relate to the generation process and the story content. We hope that TinyStories can facilitate the development\, analysis and research of language models\, especially for low-resource or specialized domains\, and shed light on the emergence of language capabilities in LMs.
URL:https://cmsa.fas.harvard.edu/event/nt-92023/
LOCATION:Virtual
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-NTM-Seminar-09.20.2023.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20230510T140000
DTEND;TZID=America/New_York:20230510T150000
DTSTAMP:20260508T041655Z
CREATED:20230809T105349Z
LAST-MODIFIED:20240228T104953Z
UID:10001225-1683727200-1683730800@cmsa.fas.harvard.edu
SUMMARY:Modern Hopfield Networks for Novel Transformer Architectures
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: Dmitry Krotov\, IBM Research – Cambridge \nTitle: Modern Hopfield Networks for Novel Transformer Architectures \nAbstract: Modern Hopfield Networks or Dense Associative Memories are recurrent neural networks with fixed point attractor states that are described by an energy function. In contrast to conventional Hopfield Networks\, which were popular in the 1980s\, their modern versions have a very large memory storage capacity\, which makes them appealing tools for many problems in machine learning and the cognitive and neural sciences. In this talk\, I will introduce an intuition and a mathematical formulation of this class of models and will give examples of problems in AI that can be tackled using these new ideas. Particularly\, I will introduce an architecture called Energy Transformer\, which replaces the conventional attention mechanism with a recurrent Dense Associative Memory model. I will explain the theoretical principles behind this architectural choice and show promising empirical results on challenging computer vision and graph network tasks.
URL:https://cmsa.fas.harvard.edu/event/nt-51023/
LOCATION:Virtual
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-NTM-Seminar-05.10.23.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20230426T140000
DTEND;TZID=America/New_York:20230426T150000
DTSTAMP:20260508T041655Z
CREATED:20230809T103350Z
LAST-MODIFIED:20240209T151145Z
UID:10001224-1682517600-1682521200@cmsa.fas.harvard.edu
SUMMARY:Toolformer: Language Models Can Teach Themselves to Use Tools
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: Timo Schick\, Meta AI \nTitle: Toolformer: Language Models Can Teach Themselves to Use Tools \nAbstract: Language models exhibit remarkable abilities to solve new tasks from just a few examples or textual instructions\, especially at scale. They also\, paradoxically\, struggle with basic functionality\, such as arithmetic or factual lookup\, where much simpler and smaller models excel. In this talk\, we show how these limitations can be overcome by letting language models teach themselves to use external tools via simple APIs. We discuss Toolformer\, a model trained to independently decide which APIs to call\, when to call them\, what arguments to pass\, and how to best incorporate the results into future token prediction. Through this\, it achieves substantially improved zero-shot performance across a variety of downstream tasks without sacrificing its core language modeling abilities.
URL:https://cmsa.fas.harvard.edu/event/nt-42623/
LOCATION:Virtual
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-NTM-Seminar-04.26.23.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20230308T140000
DTEND;TZID=America/New_York:20230308T150000
DTSTAMP:20260508T041655Z
CREATED:20230808T190051Z
LAST-MODIFIED:20240223T154858Z
UID:10001812-1678284000-1678287600@cmsa.fas.harvard.edu
SUMMARY:How to steer foundation models?
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: Jimmy Ba\, University of Toronto \nTitle: How to steer foundation models? \nAbstract: By conditioning on natural language instructions\, foundation models and large language models (LLMs) have displayed impressive capabilities as general-purpose computers. However\, task performance depends significantly on the quality of the prompt used to steer the model. Due to the lack of knowledge of how foundation models work\, most effective prompts have been handcrafted by humans through a demanding trial-and-error process. To reduce the human effort in this alignment process\, I will discuss a few approaches to steer these powerful models to excel in various downstream language and image tasks.
URL:https://cmsa.fas.harvard.edu/event/nt-3823/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/03.08.2023.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20221207T140000
DTEND;TZID=America/New_York:20221207T150000
DTSTAMP:20260508T041655Z
CREATED:20230808T185642Z
LAST-MODIFIED:20240116T060930Z
UID:10001215-1670421600-1670425200@cmsa.fas.harvard.edu
SUMMARY:How do Transformers reason? First principles via automata\, semigroups\, and circuits
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: Cyril Zhang\, Microsoft Research \nTitle: How do Transformers reason? First principles via automata\, semigroups\, and circuits \nAbstract: The current “Transformer era” of deep learning is marked by the emergence of combinatorial and algorithmic reasoning capabilities in large sequence models\, leading to dramatic advances in natural language understanding\, program synthesis\, and theorem proving. What is the nature of these models’ internal representations (i.e. how do they represent the states and computational steps of the algorithms they execute)? How can we understand and mitigate their weaknesses\, given that they resist interpretation? In this work\, we present some insights (and many further mysteries) through the lens of automata and their algebraic structure. \nSpecifically\, we investigate the apparent mismatch between recurrent models of computation (automata & Turing machines) and Transformers (which are typically shallow and non-recurrent). Using tools from circuit complexity and semigroup theory\, we characterize shortcut solutions\, whereby a shallow Transformer with only o(T) layers can exactly replicate T computational steps of an automaton. We show that Transformers can efficiently represent these shortcuts in theory; furthermore\, in synthetic experiments\, standard training successfully finds these shortcuts. We demonstrate that shortcuts can lead to statistical brittleness\, and discuss mitigations. \nJoint work with Bingbin Liu\, Jordan Ash\, Surbhi Goel\, and Akshay Krishnamurthy.
URL:https://cmsa.fas.harvard.edu/event/nt-12722/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/12.07.2022.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20221026T140000
DTEND;TZID=America/New_York:20221026T150000
DTSTAMP:20260508T041655Z
CREATED:20230808T185319Z
LAST-MODIFIED:20240115T103149Z
UID:10001214-1666792800-1666796400@cmsa.fas.harvard.edu
SUMMARY:From Engine to Auto
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeakers: João Araújo\, Mathematics Department\, Universidade Nova de Lisboa\, and Michael Kinyon\, Department of Mathematics\, University of Denver \nTitle: From Engine to Auto \nAbstract: Bill McCune produced the program EQP\, which deals with first-order logic formulas and in 1996 managed to solve Robbins’ Conjecture. This very powerful tool reduces to triviality any result that can be obtained by encoding the assumptions and the goals. The next step was to turn the program into a genuine assistant for the working mathematician: find ways to help the prover with proofs; reduce the lengths of the automatic proofs to better crack them; solve problems in higher order logic; devise tools that autonomously prove results of a given type\, etc. \nIn this talk we are going to show some of the tools and strategies we have been producing. There will be real illustrations of theorems obtained for groups\, loops\, semigroups\, logic algebras\, lattices and generalizations\, quandles\, and many more.
URL:https://cmsa.fas.harvard.edu/event/nt-102622/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-NTM-Seminar-10.26.2022.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20221019T140000
DTEND;TZID=America/New_York:20221019T150000
DTSTAMP:20260508T041655Z
CREATED:20230808T184955Z
LAST-MODIFIED:20240215T095357Z
UID:10001213-1666188000-1666191600@cmsa.fas.harvard.edu
SUMMARY:Towards Faithful Reasoning Using Language Models
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: Antonia Creswell\, DeepMind \nTitle: Towards Faithful Reasoning Using Language Models \nAbstract: Language models are showing impressive performance on many natural language tasks\, including question-answering. However\, language models – like most deep learning models – are black boxes. We cannot be sure how they obtain their answers. Do they reason over relevant knowledge to construct an answer or do they rely on prior knowledge – baked into their weights – which may be biased? An alternative approach is to develop models whose output is a human interpretable\, faithful reasoning trace leading to an answer. In this talk we will characterise faithful reasoning in terms of logically valid reasoning and demonstrate where current reasoning models fall short. Following this\, we will introduce Selection-Inference\, a faithful reasoning model\, whose causal structure mirrors the requirements for valid reasoning. We will show that our model not only produces more accurate reasoning traces but also improves final answer accuracy.
URL:https://cmsa.fas.harvard.edu/event/nt-101922/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/10.19.2022.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20221005T140000
DTEND;TZID=America/New_York:20221005T160000
DTSTAMP:20260508T041655Z
CREATED:20230808T184616Z
LAST-MODIFIED:20240214T110102Z
UID:10001212-1664978400-1664985600@cmsa.fas.harvard.edu
SUMMARY:Minerva: Solving Quantitative Reasoning Problems with Language Models
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: Guy Gur-Ari\, Google Research \nTitle: Minerva: Solving Quantitative Reasoning Problems with Language Models \nAbstract: Quantitative reasoning tasks which can involve mathematics\, science\, and programming are often challenging for machine learning models in general and for language models in particular. We show that transformer-based language models obtain significantly better performance on math and science questions when trained in an unsupervised way on a large\, math-focused dataset. Performance can be further improved using prompting and sampling techniques including chain-of-thought and majority voting. Minerva\, a model that combines these techniques\, achieves SOTA on several math and science benchmarks. I will describe the model\, its capabilities and limitations.
URL:https://cmsa.fas.harvard.edu/event/minerva-solving-quantitative-reasoning-problems-with-language-models/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/10.05.2022.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20220928T140000
DTEND;TZID=America/New_York:20220928T150000
DTSTAMP:20260508T041655Z
CREATED:20230808T184138Z
LAST-MODIFIED:20240214T110335Z
UID:10001211-1664373600-1664377200@cmsa.fas.harvard.edu
SUMMARY:Statistical mechanics of neural networks: From the geometry of high dimensional error landscapes to beating power law neural scaling
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: Surya Ganguli\, Stanford University \nTitle: Statistical mechanics of neural networks: From the geometry of high dimensional error landscapes to beating power law neural scaling \nAbstract: Statistical mechanics and neural network theory have long enjoyed fruitful interactions. We will review some of our recent work in this area and then focus on two vignettes. First we will analyze the high dimensional geometry of neural network error landscapes that happen to arise as the classical limit of a dissipative many-body quantum optimizer. In particular\, we will be able to use the Kac-Rice formula and the replica method to calculate the number\, location\, energy levels\, and Hessian eigenspectra of all critical points of any index. Second we will review recent work on neural power laws\, which reveal that the error of many neural networks falls off as a power law with network size or dataset size. Such power laws have motivated significant societal investments in large scale model training and data collection efforts. Inspired by statistical mechanics calculations\, we show both in theory and in practice how we can beat neural power law scaling with respect to dataset size\, sometimes achieving exponential scaling\, by collecting small carefully curated datasets rather than large random ones. \nReferences: \nY. Bahri\, J. Kadmon\, J. Pennington\, S. Schoenholz\, J. Sohl-Dickstein\, and S. Ganguli\, Statistical mechanics of deep learning\, Annual Review of Condensed Matter Physics\, 2020. \nB. Sorscher\, R. Geirhos\, S. Shekhar\, S. Ganguli\, and A. S. Morcos\, Beyond neural scaling laws: beating power law scaling via data pruning\, NeurIPS 2022. https://arxiv.org/abs/2206.14486
URL:https://cmsa.fas.harvard.edu/event/8303/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-NTM-Seminar-09.28.2022.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20220914T140000
DTEND;TZID=America/New_York:20220914T150000
DTSTAMP:20260508T041655Z
CREATED:20230808T183823Z
LAST-MODIFIED:20240301T091205Z
UID:10001210-1663164000-1663167600@cmsa.fas.harvard.edu
SUMMARY:Breaking the one-mind-barrier in mathematics using formal verification
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: Johan Commelin\, Mathematisches Institut\, Albert-Ludwigs-Universität Freiburg \nTitle: Breaking the one-mind-barrier in mathematics using formal verification \nAbstract: In this talk I will argue that formal verification helps break the one-mind-barrier in mathematics. Indeed\, formal verification allows a team of mathematicians to collaborate on a project\, without one person understanding all parts of the project. At the same time\, it also allows a mathematician to rapidly free mental RAM in order to work on a different component of a project. It thus also expands the one-mind-barrier. \nI will use the Liquid Tensor Experiment as an example\, to illustrate the above two points. This project recently finished the formalization of the main theorem of liquid vector spaces\, following up on a challenge by Peter Scholze.
URL:https://cmsa.fas.harvard.edu/event/breaking-the-one-mind-barrier-in-mathematics-using-formal-verification/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:New Technologies in Mathematics Seminar
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20220330T140000
DTEND;TZID=America/New_York:20220330T150000
DTSTAMP:20260508T041655Z
CREATED:20230808T183529Z
LAST-MODIFIED:20240515T202223Z
UID:10001209-1648648800-1648652400@cmsa.fas.harvard.edu
SUMMARY:Memorizing Transformers
DESCRIPTION:Speaker: Yuhuai Wu\, Stanford and Google \nTitle: Memorizing Transformers \nAbstract: Language models typically need to be trained or fine-tuned in order to acquire new knowledge\, which involves updating their weights. We instead envision language models that can simply read and memorize new data at inference time\, thus acquiring new knowledge immediately. In this talk\, I will discuss how we extend language models with the ability to memorize the internal representations of past inputs. We demonstrate that an approximate kNN lookup into a non-differentiable memory of recent (key\, value) pairs improves language modeling across various benchmarks and tasks\, including generic webtext (C4)\, math papers (arXiv)\, books (PG-19)\, code (GitHub)\, as well as formal theorems (Isabelle). We show that the performance steadily improves when we increase the size of memory up to 262K tokens. We also find that the model is capable of making use of newly defined functions and theorems during test time.
URL:https://cmsa.fas.harvard.edu/event/3-30-2022-new-technologies-in-mathematics-seminar/
LOCATION:Virtual
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/jpeg:https://cmsa.fas.harvard.edu/media/CMSA-NTM-Seminar-03.30.2022-1583x2048-1.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20220323T140000
DTEND;TZID=America/New_York:20220323T150000
DTSTAMP:20260508T041655Z
CREATED:20230808T183247Z
LAST-MODIFIED:20240515T202339Z
UID:10001208-1648044000-1648047600@cmsa.fas.harvard.edu
SUMMARY:Formal Mathematics Statement Curriculum Learning
DESCRIPTION:Speaker: Stanislas Polu\, OpenAI \nTitle: Formal Mathematics Statement Curriculum Learning \nAbstract: We explore the use of expert iteration in the context of language modeling applied to formal mathematics. We show that at the same compute budget\, expert iteration\, by which we mean proof search interleaved with learning\, dramatically outperforms proof search only. We also observe that when applied to a collection of formal statements of sufficiently varied difficulty\, expert iteration is capable of finding and solving a curriculum of increasingly difficult problems\, without the need for associated ground-truth proofs. Finally\, by applying this expert iteration to a manually curated set of problem statements\, we achieve state-of-the-art on the miniF2F benchmark\, automatically solving multiple challenging problems drawn from high school olympiads.
URL:https://cmsa.fas.harvard.edu/event/3-23-2022-new-technologies-in-mathematics-seminar/
LOCATION:MA
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/jpeg:https://cmsa.fas.harvard.edu/media/CMSA-NTM-Seminar-03.23.2022-1553x2048-1.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20220309T140000
DTEND;TZID=America/New_York:20220309T150000
DTSTAMP:20260508T041655Z
CREATED:20230808T182829Z
LAST-MODIFIED:20240813T160025Z
UID:10001207-1646834400-1646834400@cmsa.fas.harvard.edu
SUMMARY:Machine Learning 30 STEM Courses in 12 Departments
DESCRIPTION:Speaker: Iddo Drori\, MIT EE&CS and Columbia School of Engineering \nTitle: Machine Learning 30 STEM Courses in 12 Departments \nAbstract: We automatically solve\, explain\, and generate university-level course problems from thirty STEM courses (at MIT\, Harvard\, and Columbia) for the first time.\nWe curate a new dataset of course questions and answers across a dozen departments: Aeronautics and Astronautics\, Chemical Engineering\, Chemistry\, Computer Science\, Economics\, Electrical Engineering\, Materials Science\, Mathematics\, Mechanical Engineering\, Nuclear Science\, Physics\, and Statistics.\nWe generate new questions and use them in a Columbia University course\, and perform A/B tests demonstrating that these machine generated questions are indistinguishable from human-written questions and that machine generated explanations are as useful as human-written explanations\, again for the first time.\nOur approach consists of five steps:\n(i) Given course questions\, turn them into programming tasks;\n(ii) Automatically generate programs from the programming tasks using a Transformer model\, OpenAI Codex\, pre-trained on text and fine-tuned on code;\n(iii) Execute the programs to obtain and evaluate the answers;\n(iv) Automatically explain the correct solutions using Codex;\n(v) Automatically generate new questions that are qualitatively indistinguishable from human-written questions.\nThis work is a significant step forward in applying machine learning for education\, automating a considerable part of the work involved in teaching.\nOur approach allows personalization of questions based on difficulty level and student backgrounds\, and scales up to a broad range of courses across the schools of engineering and science. \nThis is joint work with students and colleagues at MIT\, Harvard University\, Columbia University\, Worcester Polytechnic Institute\, and the University of Waterloo.
URL:https://cmsa.fas.harvard.edu/event/3-9-2022-new-technologies-in-mathematics-seminar/
LOCATION:MA
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-NTM-Seminar-03.09.2022.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20220302T140000
DTEND;TZID=America/New_York:20220302T150000
DTSTAMP:20260508T041655Z
CREATED:20230808T182233Z
LAST-MODIFIED:20240517T193649Z
UID:10001206-1646229600-1646233200@cmsa.fas.harvard.edu
SUMMARY:Scaling Laws and Their Implications for Coding AI
DESCRIPTION:Speaker: Jared Kaplan\, Johns Hopkins Dept. of Physics & Astronomy \nTitle: Scaling Laws and Their Implications for Coding AI \nAbstract: Scaling laws and associated downstream trends can be used as an organizing principle when thinking about current and future ML progress. I will briefly review scaling laws for generative models in a number of domains\, emphasizing language modeling. Then I will discuss scaling results for transfer from natural language to code\, and results on python programming performance from “codex” and other models. If there’s time I’ll discuss prospects for the future — limitations from dataset sizes\, and prospects for RL and other techniques.
URL:https://cmsa.fas.harvard.edu/event/3-2-2022-new-technologies-in-mathematics-seminar/
LOCATION:Virtual
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/jpeg:https://cmsa.fas.harvard.edu/media/03.2.2022-1553x2048-1.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20220216T140000
DTEND;TZID=America/New_York:20220216T150000
DTSTAMP:20260508T041655Z
CREATED:20230808T181915Z
LAST-MODIFIED:20240515T205523Z
UID:10001205-1645020000-1645023600@cmsa.fas.harvard.edu
SUMMARY:Bootstrapping hyperbolic manifolds
DESCRIPTION:Speaker: James Bonifacio\, Cambridge DAMTP \nTitle: Bootstrapping hyperbolic manifolds \nAbstract: Hyperbolic manifolds are a class of Riemannian manifolds that are important in mathematics and physics\, playing a prominent role in topology\, number theory\, and string theory. Associated with a given hyperbolic metric is a sequence of numbers corresponding to the discrete eigenvalues of the Laplace-Beltrami operator. While these eigenvalues usually cannot be calculated exactly\, they can be found numerically and must also satisfy various bounds. In this talk\, I will discuss a new approach for finding numerical bounds on the eigenvalues of closed hyperbolic manifolds using general consistency conditions and semidefinite programming\, inspired by the approach of the conformal bootstrap from physics. Although these bootstrap bounds follow from seemingly trivial consistency conditions\, they are surprisingly strong and are sometimes almost saturated by actual manifolds; for example\, one such bound implies that the first nonzero eigenvalue of a closed hyperbolic surface must be less than 3.83890\, and this is very close to being saturated by a particular genus-2 surface called the Bolza surface. I will show how to derive this and other bounds and will discuss some possible future directions for this approach.
URL:https://cmsa.fas.harvard.edu/event/2-16-2022-new-technologies-in-mathematics-seminar/
LOCATION:Virtual
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-NTM-Seminar-02.16.2022-1553x2048-1.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20220209T140000
DTEND;TZID=America/New_York:20220209T150000
DTSTAMP:20260508T041655Z
CREATED:20230808T181534Z
LAST-MODIFIED:20240517T193404Z
UID:10001204-1644415200-1644418800@cmsa.fas.harvard.edu
SUMMARY:Toward Demystifying Transformers and Attention
DESCRIPTION:Speaker: Ben Edelman\, Harvard Computer Science \nTitle: Toward Demystifying Transformers and Attention \nAbstract: Over the past several years\, attention mechanisms (primarily in the form of the Transformer architecture) have revolutionized deep learning\, leading to advances in natural language processing\, computer vision\, code synthesis\, protein structure prediction\, and beyond. Attention has a remarkable ability to enable the learning of long-range dependencies in diverse modalities of data. And yet\, there is at present limited principled understanding of the reasons for its success. In this talk\, I’ll explain how attention mechanisms and Transformers work\, and then I’ll share the results of a preliminary investigation into why they work so well. In particular\, I’ll discuss an inductive bias of attention that we call sparse variable creation: bounded-norm Transformer layers are capable of representing sparse Boolean functions\, with statistical generalization guarantees akin to sparse regression.
URL:https://cmsa.fas.harvard.edu/event/2-9-2022-new-technologies-in-mathematics-seminar/
LOCATION:Virtual
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-NTM-Seminar-02.09.2022-1553x2048-1.png
END:VEVENT
END:VCALENDAR