BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//CMSA - ECPv6.15.18//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:CMSA
X-ORIGINAL-URL:https://cmsa.fas.harvard.edu
X-WR-CALDESC:Events for CMSA
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20210314T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20211107T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20220313T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20221106T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20230312T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20231105T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20240310T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20241103T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20250309T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20251102T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240911T103000
DTEND;TZID=America/New_York:20240911T120000
DTSTAMP:20260424T115027Z
CREATED:20240907T155038Z
LAST-MODIFIED:20240911T210751Z
UID:10003444-1726050600-1726056000@cmsa.fas.harvard.edu
SUMMARY:Math and Machine Learning Program Discussion
DESCRIPTION:Math and Machine Learning Program Discussion \n 
URL:https://cmsa.fas.harvard.edu/event/mml_meeting_91124/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:MML Meeting
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240903T090000
DTEND;TZID=America/New_York:20241101T170000
DTSTAMP:20260424T115027Z
CREATED:20240105T033600Z
LAST-MODIFIED:20250305T175957Z
UID:10001112-1725354000-1730480400@cmsa.fas.harvard.edu
SUMMARY:Mathematics and Machine Learning Program
DESCRIPTION:Mathematics and Machine Learning Program \nDates: September 3 – November 1\, 2024 \nLocation: Harvard CMSA\, 20 Garden Street\, Cambridge\, MA 02138 \nMachine learning and AI are increasingly important tools in all fields of research. Recent milestones in machine learning for mathematics include data-driven discovery of theorems in knot theory and representation theory\, the discovery and proof of new singular solutions of the Euler equations\, new counterexamples and lower bounds in graph theory\, and more. Rigorous numerical methods and interactive theorem proving are playing an important part in obtaining these results. Conversely\, much of the spectacular progress in AI has a surprising simplicity at its core. Surely there are remarkable mathematical structures behind this\, yet to be elucidated. \nThe program will begin and end with two week-long workshops\, and will feature focus weeks on number theory\, knot theory\, graph theory\, rigorous numerics in PDE\, and interactive theorem proving\, as well as a course on geometric aspects of deep learning.\n\n  \nSeptember 3–5\, 2024: Opening Workshop: AI for Mathematicians\, with Leon Bottou\, François Charton\, David McAllester\, Adam Wagner and Geordie Williamson. A series of six lectures covering logic and theorem proving\, AI methods\, theory of machine learning\, two lectures on case studies in math-AI\, and a lecture and discussion on open problems and the ethics of AI in science.\nOpening Workshop YouTube Playlist \n\nSeptember 6–7\, 2024: Big Data Conference \n  \nSeptember 9–13\, 2024: Applying Machine Learning to Math\, with François Charton and Geordie Williamson\nPublic Lecture September 12\, 2024: Geordie Williamson\, University of Sydney: Can AI help with hard mathematics? (YouTube link)\nThe focus of this week will be on practical examples and techniques for the mathematics researcher keen to explore or deepen their use of AI techniques. We will have talks showcasing easily stated problems\, on which machine learning techniques can be employed profitably. These provide excellent toy examples for generating intuition. We will also have expert talks on some of the technical subtleties which arise. There are several instances where the accepted heuristics emerging from the study of large language models (LLMs) and image recognition don’t appear to apply to mathematics problems\, and we will try to highlight these subtleties.\nApplying Machine Learning to Math YouTube Playlist \n  \nSeptember 16–20\, 2024: Number theory\, with Drew Sutherland\nThe focus of this week will be on the use of ML as a tool for finding and understanding statistical patterns in number-theoretic datasets\, using the recently discovered (and still largely unexplained) “murmurations” in the distribution of Frobenius traces in families of elliptic curves and other arithmetic L-functions as a motivating example.\nNumber Theory YouTube Playlist \n  \nSeptember 23–27\, 2024: Knot theory\, with Sergei Gukov\nKnot theory is a great source of labeled data that can be synthetically generated. Moreover\, many outstanding problems in knot theory and low-dimensional topology can be formulated as decision and classification tasks\, e.g. “Is the knot 123_45 slice?” or “Can two given Kirby diagrams be related by a sequence of Kirby moves?” During this focus week we will explore various ways in which AI can be applied to problems in knot theory and how\, based on these applications\, mathematical reasoning can advance the development of AI algorithms. 
Another goal will be to develop formal knot theory libraries (e.g. contributions to mathlib) and to apply AI models to formal proof systems\, in particular in the context of knot theory.\nKnot Theory YouTube Playlist \n  \nSeptember 30: Teaching and Machine Learning Panel Discussion\, 3:30–5:30 pm ET \n  \nSeptember 30–October 4\, 2024: Graph theory and combinatorics\, with Adam Wagner\nThis week\, we will consider how machine learning can help us solve problems in combinatorics and graph theory\, broadly interpreted\, in practice. The advantage of these fields is that they deal with finite objects that are simple to set up using computers\, and programs that work for one problem can often be adapted to work for several other related problems as well. Many times\, the best constructions for a problem are easy to interpret\, making it simpler to judge how well a particular algorithm is performing. On the other hand\, there are lots of open conjectures that are simple to state\, for which the best-known constructions are counterintuitive\, making it perhaps more likely that machine learning methods can spot patterns that are difficult to understand otherwise.\nGraph Theory and Combinatorics YouTube Playlist \n  \nOctober 7–11\, 2024: More number theory\, with Drew Sutherland\nThe focus of this week will be on the use of AI as a tool to search for and/or construct interesting or extremal examples in number theory and arithmetic geometry\, using LLM-based genetic algorithms\, generative adversarial networks\, game-theoretic methods\, and heuristic tree pruning as alternatives to conventional local search strategies.\nMore Number Theory YouTube Playlist \n  \nOctober 14–18\, 2024: Interactive theorem proving\nThis week we will discuss the use of interactive theorem proving systems such as Lean\, Coq and Isabelle in mathematical research\, and AI systems which prove theorems and translate between informal and formal mathematics.\nInteractive Theorem Proving YouTube Playlist \n  \nOctober 21–25\, 2024: Numerical Partial Differential Equations (PDE)\, with Tristan Buckmaster and Javier Gomez-Serrano\nThe focus of this week will be on constructing solutions to partial differential equations and dynamical systems (finite and infinite dimensional) more broadly defined. We will discuss several toy problems and comment on issues like sampling strategies\, optimization algorithms\, ill-posedness\, or convergence. We will also outline strategies for further developing machine-learning findings and turning them into mathematical theorems via computer-assisted approaches.\nNumerical PDEs YouTube Playlist \n  \nOctober 28–Nov. 1\, 2024: Closing Workshop: The closing workshop will provide a forum for discussing the most current research in these areas\, including work in progress and recent results from program participants.\nMath and Machine Learning Closing Workshop YouTube Playlist \n  \nSeptember 3–Nov. 1: Graduate topics in deep learning theory (Boston College) taught by Eli Grigsby\, held at the CMSA Tuesdays and Thursdays 2:30–3:45 pm Eastern Time. Course website (link).\nGraduate Topics in Deep Learning YouTube Playlist \nCourse description: This is a course on geometric aspects of deep learning theory. 
Broadly speaking\, we’ll investigate the question: How might human-interpretable concepts be expressed in the geometry of their data encodings\, and how does this geometry interact with the computational units and higher-level algebraic structures in various parameterized function classes\, especially neural network classes? During the Sep. 3–Nov. 1 portion\, the course will be presented as part of the Math and Machine Learning program at the CMSA in Cambridge. During that portion\, we will focus on the current state of research on mechanistic interpretability of transformers\, the architecture underlying large language models like ChatGPT. \n\n\n\n\nPrerequisites: This course is targeted to graduate students and advanced undergraduates in mathematics and theoretical computer science. No prior background in machine learning or learning theory will be assumed\, but I will assume a degree of mathematical maturity (at the level of\, say\, the standard undergraduate math curriculum plus a first-year graduate geometry/topology sequence).\n\n\n\n\n\nProgram Organizers \n\nFrançois Charton (Meta AI)\nMichael R. Douglas (Harvard CMSA)\nMichael Freedman (Harvard CMSA)\nFabian Ruehle (Northeastern)\nGeordie Williamson (Univ. of Sydney)\n\n\nProgram Schedule  \nMonday\n10:30–noon\nOpen Discussion\nRoom G10 \n12:00–1:30 pm\nGroup lunch\nCMSA Common Room \nTuesday\n2:30–3:45 pm\nTopics in deep learning theory\nRoom G10 \n4:00–5:00 pm\nOpen Discussion/Tea\nCMSA Common Room \nWednesday\n10:30 am–12:00 pm\nOpen Discussion\nRoom G10 \n2:00–3:00 pm\nNew Technologies in Mathematics Seminar\nRoom G10 \nThursday\n2:30–3:45 pm\nTopics in deep learning theory\nRoom G10 \nFriday\n10:30 am–12:00 pm\nOpen Discussion\nRoom G10 \n\nHarvard CMSA thanks Mistral AI for a generous donation of computing credit.
URL:https://cmsa.fas.harvard.edu/event/mml2024/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Event,Programs
ATTACH;FMTTYPE=image/jpeg:https://cmsa.fas.harvard.edu/media/Machine-Learning-Program-poster-1.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240320T140000
DTEND;TZID=America/New_York:20240320T150000
DTSTAMP:20260424T115027Z
CREATED:20240130T215041Z
LAST-MODIFIED:20240321T140550Z
UID:10001519-1710943200-1710946800@cmsa.fas.harvard.edu
SUMMARY:Solving olympiad geometry without human demonstrations
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: Trieu H. Trinh\, Google DeepMind and NYU Dept. of Computer Science \nTitle: Solving olympiad geometry without human demonstrations \nAbstract: Proving mathematical theorems at the olympiad level represents a notable milestone in human-level automated reasoning\, owing to their reputed difficulty among the world’s best talents in pre-university mathematics. Current machine-learning approaches\, however\, are not applicable to most mathematical domains owing to the high cost of translating human proofs into machine-verifiable format. The problem is even worse for geometry because of its unique translation challenges\, resulting in severe scarcity of training data. We propose AlphaGeometry\, a theorem prover for Euclidean plane geometry that sidesteps the need for human demonstrations by synthesizing millions of theorems and proofs across different levels of complexity. AlphaGeometry is a neuro-symbolic system that uses a neural language model\, trained from scratch on our large-scale synthetic data\, to guide a symbolic deduction engine through infinite branching points in challenging problems. On a test set of 30 latest olympiad-level problems\, AlphaGeometry solves 25\, outperforming the previous best method that only solves ten problems and approaching the performance of an average International Mathematical Olympiad (IMO) gold medallist. Notably\, AlphaGeometry produces human-readable proofs\, solves all geometry problems in the IMO 2000 and 2015 under human expert evaluation and discovers a generalized version of a translated IMO theorem in 2004. \n 
URL:https://cmsa.fas.harvard.edu/event/nt-32024/
LOCATION:Virtual
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-NTM-Seminar-03.20.2024.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240306T140000
DTEND;TZID=America/New_York:20240306T150000
DTSTAMP:20260424T115027Z
CREATED:20240108T153449Z
LAST-MODIFIED:20240306T221235Z
UID:10001129-1709733600-1709737200@cmsa.fas.harvard.edu
SUMMARY:LILO: Learning Interpretable Libraries by Compressing and Documenting Code
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: Gabe Grand\, MIT CSAIL and Dept. of EE&CS \nTitle: LILO: Learning Interpretable Libraries by Compressing and Documenting Code \nAbstract: While large language models (LLMs) now excel at code generation\, a key aspect of software development is the art of refactoring: consolidating code into libraries of reusable and readable programs. In this paper\, we introduce LILO\, a neurosymbolic framework that iteratively synthesizes\, compresses\, and documents code to build libraries tailored to particular problem domains. LILO combines LLM-guided program synthesis with recent algorithmic advances in automated refactoring from Stitch: a symbolic compression system that efficiently identifies optimal lambda abstractions across large code corpora. To make these abstractions interpretable\, we introduce an auto-documentation (AutoDoc) procedure that infers natural language names and docstrings based on contextual examples of usage. In addition to improving human readability\, we find that AutoDoc boosts performance by helping LILO’s synthesizer to interpret and deploy learned abstractions. We evaluate LILO on three inductive program synthesis benchmarks for string editing\, scene reasoning\, and graphics composition. Compared to existing neural and symbolic methods – including the state-of-the-art library learning algorithm DreamCoder – LILO solves more complex tasks and learns richer libraries that are grounded in linguistic knowledge.
URL:https://cmsa.fas.harvard.edu/event/nt-3624/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-NTM-Seminar-03.06.2024.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240221T140000
DTEND;TZID=America/New_York:20240221T150000
DTSTAMP:20260424T115027Z
CREATED:20240105T034012Z
LAST-MODIFIED:20240223T152643Z
UID:10001113-1708524000-1708527600@cmsa.fas.harvard.edu
SUMMARY:Computers and mathematics in partial differential equations: New developments and challenges
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: Javier Gomez Serrano\, Brown University \nTitle: Computers and mathematics in partial differential equations: new developments and challenges \nAbstract: In this talk I will address the interaction between traditional and more modern mathematics and how computers have helped over the last decade providing rigorous (computer-assisted) proofs in the context of partial differential equations. I will also describe new exciting future directions in the field. No background is assumed.
URL:https://cmsa.fas.harvard.edu/event/nt-22124/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-NTM-Seminar-02.21.2024.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240214T140000
DTEND;TZID=America/New_York:20240214T150000
DTSTAMP:20260424T115027Z
CREATED:20240102T164110Z
LAST-MODIFIED:20240130T194619Z
UID:10000151-1707919200-1707922800@cmsa.fas.harvard.edu
SUMMARY:What Algorithms can Transformers Learn? A Study in Length Generalization
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: Preetum Nakkiran\, Apple \nTitle: What Algorithms can Transformers Learn? A Study in Length Generalization \nAbstract: Large language models exhibit many surprising “out-of-distribution” generalization abilities\, yet also struggle to solve certain simple tasks like decimal addition. To clarify the scope of Transformers’ out-of-distribution generalization\, we isolate this behavior in a specific controlled setting: length generalization on algorithmic tasks. E.g.: Can a model trained on 10-digit addition generalize to 50-digit addition? For which tasks do we expect this to work? \nOur key tool is the recently introduced RASP language (Weiss et al. 2021)\, a programming language tailor-made for the Transformer’s computational model. We conjecture\, informally\, that Transformers tend to length-generalize on a task if there exists a short RASP program that solves the task for all input lengths. This simple conjecture remarkably captures most known instances of length generalization on algorithmic tasks\, and can also inform the design of effective scratchpads. Finally\, on the theoretical side\, we give a simple separating example between our conjecture and the “min-degree-interpolator” model of learning from Abbe et al. (2023). \nJoint work with Hattie Zhou\, Arwen Bradley\, Etai Littwin\, Noam Razin\, Omid Saremi\, Josh Susskind\, and Samy Bengio. To appear in ICLR 2024. \n 
URL:https://cmsa.fas.harvard.edu/event/nt21424/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-NTM-Seminar-02.14.2024.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240207T130000
DTEND;TZID=America/New_York:20240207T140000
DTSTAMP:20260424T115027Z
CREATED:20240102T163838Z
LAST-MODIFIED:20240207T220617Z
UID:10000149-1707310800-1707314400@cmsa.fas.harvard.edu
SUMMARY:Large language models\, mathematical discovery\, and search in the space of strategies: an anecdote
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: Jordan Ellenberg (UW Madison) \nTitle: Large language models\, mathematical discovery\, and search in the space of strategies: an anecdote \nAbstract: I spent a portion of 2023 working with a team at DeepMind on the “cap set problem” – how large can a subset of (Z/3Z)^n be which contains no three terms which sum to zero? (I will explain\, for those not familiar with this problem\, something about the role it plays in combinatorics\, its history\, and why number theorists care about it a lot.) By now\, there are many examples of machine learning mechanisms being used to help generate interesting mathematical knowledge\, and especially interesting examples. This project used a novel protocol: instead of searching directly for large cap sets\, we used LLMs trained on code to search the space of short programs for those which\, when executed\, output large cap sets. One advantage is that a program is much more human-readable than a large collection of vectors over Z/3Z\, bringing us closer to the not-very-well-defined-but-important goal of “interpretable machine learning.” I’ll talk about what succeeded in this project (more than I expected!)\, what didn’t\, and what role I can imagine this approach to the math-ML interface playing in near-future mathematical practice. \nThe paper: https://www.nature.com/articles/s41586-023-06924-6 \n 
URL:https://cmsa.fas.harvard.edu/event/nt2724/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-NTM-Seminar-02.07.24.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240124T140000
DTEND;TZID=America/New_York:20240124T150000
DTSTAMP:20260424T115027Z
CREATED:20240102T163450Z
LAST-MODIFIED:20240125T165049Z
UID:10000148-1706104800-1706108400@cmsa.fas.harvard.edu
SUMMARY:Approaches to the formalization of differential geometry
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: Heather Macbeth\, Fordham University \nTitle: Approaches to the formalization of differential geometry \nAbstract: In the last five years\, there has been early work on the computer formalization of differential geometry. I will survey the projects I am aware of. I will also describe two projects of my own\, as case studies for typical challenges. The first (joint with Floris van Doorn) is an exercise in developing suitable abstractions\, the second (joint with Mario Carneiro) is an exercise in developing suitable automation.
URL:https://cmsa.fas.harvard.edu/event/nt-12424/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-NTM-Seminar-01.24.2024.docx-1.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20231115T140000
DTEND;TZID=America/New_York:20231115T150000
DTSTAMP:20260424T115027Z
CREATED:20240222T094758Z
LAST-MODIFIED:20240222T095355Z
UID:10002797-1700056800-1700060400@cmsa.fas.harvard.edu
SUMMARY:On the Power of Forward Pass through Transformer Architectures
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: Abhishek Panigrahi\, Dept. of Computer Science\, Princeton University \nTitle: On the Power of Forward Pass through Transformer Architectures \nAbstract: Highly trained transformers are capable of interesting computations as they run inference on an input. The exact mechanism that these models use during forward passes is an interesting area of study. This talk studies two interesting phenomena. \nIn the first half\, we explore how and why pre-trained language models\, specifically BERT of moderate sizes\, can effectively learn linguistic structures like parse trees during pre-training. Specifically\, using synthetic data generated by PCFGs\, we show how moderate-sized transformers can perform forward-backward parsing\, also known as the inside-outside algorithm\, during inference. We further examine the role of the pre-training loss in enabling the model to learn to parse during pre-training. \nIn the second half\, we consider in-context learning of large language models\, where they learn to reason on the fly. An ongoing hypothesis is that transformers simulate gradient descent at inference to perform in-context learning. We propose the Transformer in Transformer (TinT) framework\, which creates explicit transformer architectures that can simulate and fine-tune a small pre-trained transformer model during inference. E.g.\, a 1.3B-parameter TinT model can simulate and fine-tune a 125-million-parameter model in a single forward pass. This framework suggests that large transformers might execute intricate sub-routines during inference\, and provides insights for enhancing their capabilities through intelligent design considerations. \n 
URL:https://cmsa.fas.harvard.edu/event/nt-111523/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/NTM-11.15.2023.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20231108T140000
DTEND;TZID=America/New_York:20231108T150000
DTSTAMP:20260424T115027Z
CREATED:20240222T095919Z
LAST-MODIFIED:20240222T095919Z
UID:10002798-1699452000-1699455600@cmsa.fas.harvard.edu
SUMMARY:Peano: Learning Formal Mathematical Reasoning Without Human Data
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: Gabriel Poesia\, Dept. of Computer Science\, Stanford University \nTitle: Peano: Learning Formal Mathematical Reasoning Without Human Data \nAbstract: Peano is a theorem proving environment in which a computational agent can start tabula rasa in a new domain\, learn to solve problems through curiosity-driven exploration\, and create its own higher-level actions. Gabriel will describe the system\, present case studies on learning to solve simple algebra problems from the Khan Academy platform\, and describe work in progress on learning the Natural Number Game\, a popular introduction to theorem proving in Lean for mathematicians. \n 
URL:https://cmsa.fas.harvard.edu/event/nt-11823/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/NTM-11.08.2023.png
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20231027
DTEND;VALUE=DATE:20231029
DTSTAMP:20260424T115027Z
CREATED:20230904T060021Z
LAST-MODIFIED:20240624T182341Z
UID:10000002-1698364800-1698537599@cmsa.fas.harvard.edu
SUMMARY:Mathematics in Science: Perspectives and Prospects
DESCRIPTION:Mathematics in Science: Perspectives and Prospects\nA showcase of mathematics in interaction with physics\, computer science\, biology\, and beyond. \nOctober 27–28\, 2023 \nLocation: Harvard University Science Center Hall D & via Zoom. \nDirections and Recommended Lodging \nMathematics in Science: Perspectives and Prospects YouTube Playlist \n  \n\nSpeakers \n\nNima Arkani-Hamed (IAS)\nConstantinos Daskalakis (MIT)\nAlison Etheridge (Oxford)\nMike Freedman (Harvard CMSA)\nGreg Moore (Rutgers)\nBernd Sturmfels (MPI Leipzig)\n\n\nOrganizers \n\nMichael R. Douglas (Harvard CMSA)\nDan Freed (Harvard Math & CMSA)\nMike Hopkins (Harvard Math)\nCumrun Vafa (Harvard Physics)\nHorng-Tzer Yau (Harvard Math)\n\nSchedule\nFriday\, October 27\, 2023 \n\n\n\n2:00–3:15 pm\n\nGreg Moore (Rutgers) \nTitle: Remarks on Physical Mathematics \nAbstract: I will describe some examples of the vigorous modern dialogue between mathematics and theoretical physics (especially high energy and condensed matter physics). I will begin by recalling Stokes’ phenomenon and explain how it is related to some notable developments in quantum field theory from the past 30 years. Time permitting\, I might also say something about the dialogue between mathematicians working on the differential topology of four-manifolds and physicists working on supersymmetric quantum field theories. But I haven’t finished writing the talk yet\, so I don’t know how it will end any more than you do. \nSlides (PDF) \n \n\n\n\n3:15–3:45 pm\nBreak\n\n\n3:45–5:00 pm\n\nBernd Sturmfels (MPI Leipzig) \nTitle: Algebraic Varieties in Quantum Chemistry \nAbstract: We discuss the algebraic geometry behind coupled cluster (CC) theory of quantum many-body systems. The high-dimensional eigenvalue problems that encode the electronic Schroedinger equation are approximated by a hierarchy of polynomial systems at various levels of truncation. The exponential parametrization of the eigenstates gives rise to truncation varieties. These generalize Grassmannians in their Pluecker embedding. We explain how to derive Hamiltonians\, we offer a detailed study of truncation varieties and their CC degrees\, and we present the state of the art in solving the CC equations. This is joint work with Fabian Faulstich and Svala Sverrisdóttir. \nSlides (PDF) \n \n\n\n\n\n  \nSaturday\, October 28\, 2023 \n\n\n\n9:00 am\nBreakfast\n\n\n9:30–10:45 am\n\nMike Freedman (Harvard CMSA) \nTitle: ML\, QML\, and Dynamics: What mathematics can help us understand and advance machine learning? \nAbstract: Vanilla deep neural nets (DNNs) repeatedly stretch and fold. They are reminiscent of the logistic map and the Smale horseshoe. What kind of dynamics is responsible for their expressivity and trainability? Is chaos playing a role? Is the Kolmogorov-Arnold representation theorem relevant? Large language models are full of linear maps. Might we look for emergent tensor structures in these highly trained maps\, in analogy with emergent tensor structures at local minima of certain loss functions in high-energy physics? \nSlides (PDF) \n \n\n\n\n10:45–11:15 am\nBreak\n\n\n11:15 am–12:30 pm (via Zoom)\n\nNima Arkani-Hamed (IAS) \nTitle: All-Loop Scattering as A Counting Problem \nAbstract: I will describe a new understanding of scattering amplitudes based on fundamentally combinatorial ideas in the kinematic space of the scattering data. 
I first discuss a toy model\, the simplest theory of colored scalar particles with cubic interactions\, at all loop orders and to all orders in the topological ‘t Hooft expansion. I will present a novel formula for loop-integrated amplitudes\, with no trace of the conventional sum over Feynman diagrams\, but instead determined by a beautifully simple counting problem attached to any order of the topological expansion. A surprisingly simple shift of kinematic variables converts this apparent toy model into the realistic physics of pions and Yang-Mills theory. These results represent a significant step forward in the decade-long quest to formulate the fundamental physics of the real world in a new language\, where the rules of spacetime and quantum mechanics\, as reflected in the principles of locality and unitarity\, are seen to emerge from deeper mathematical structures. \n \n\n\n\n12:30–2:00 pm\nLunch break\n\n\n2:00–3:15 pm\n\nConstantinos Daskalakis (MIT) \nTitle: How to train deep neural nets to think strategically \nAbstract: Many outstanding challenges in Deep Learning lie at its interface with Game Theory: from playing difficult games like Go to robustifying classifiers against adversarial attacks\, training deep generative models\, and training DNN-based models to interact with each other and with humans. In these applications\, the utilities that the agents aim to optimize are non-concave in the parameters of the underlying DNNs; as a result\, Nash equilibria fail to exist\, and standard equilibrium analysis is inapplicable. So how can one train DNNs to be strategic? What is even the goal of the training? We shed light on these challenges through a combination of learning-theoretic\, complexity-theoretic\, game-theoretic and topological techniques\, presenting obstacles and opportunities for Deep Learning and Game Theory going forward. \nSlides (PDF) \n \n\n\n\n3:15–3:45 pm\nBreak\n\n\n3:45–5:00 pm\n\nAlison Etheridge (Oxford) \nTitle: Modelling hybrid zones \nAbstract: Mathematical models play a fundamental role in theoretical population genetics and\, in turn\, population genetics provides a wealth of mathematical challenges. In this lecture we investigate the interplay between a particular (ubiquitous) form of natural selection\, spatial structure\, and\, if time permits\, so-called genetic drift. A simple mathematical caricature will uncover the importance of the shape of the domain inhabited by a species for the effectiveness of natural selection. \nSlides (PDF) \n \n\n\n\n\nLimited funding to help defray travel expenses is available for graduate students and recent PhDs. If you are a graduate student or postdoc and would like to apply for support\, please register above and send an email to mathsci2023@cmsa.fas.harvard.edu no later than October 9\, 2023. \nPlease include your name\, address\, current status\, university affiliation\, citizenship\, and area of study. F1 visa holders are eligible to apply for support. If you are a graduate student\, please send a brief letter of recommendation from a faculty member to explain the relevance of the conference to your studies or research. If you are a postdoc\, please include a copy of your CV. \n\n 
URL:https://cmsa.fas.harvard.edu/event/mathematics-in-science/
LOCATION:Harvard Science Center\, 1 Oxford Street\, Cambridge\, MA\, 02138
CATEGORIES:Conference,Event
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/MathScience2023Poster_8.5x11.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20231025T140000
DTEND;TZID=America/New_York:20231025T150000
DTSTAMP:20260424T115027Z
CREATED:20240223T105453Z
LAST-MODIFIED:20240223T105453Z
UID:10002853-1698242400-1698246000@cmsa.fas.harvard.edu
SUMMARY:Llemma: an open language model for mathematics
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: Sean Welleck\, CMU\, Language Technologies Institute \nTitle: Llemma: an open language model for mathematics \nAbstract: We present Llemma: 7 billion and 34 billion parameter language models for mathematics. The Llemma models are initialized with Code Llama weights\, then trained on the Proof-Pile II\, a 55 billion token dataset of mathematical web data\, code\, and scientific papers. The resulting models show improved mathematical capabilities\, and can be adapted to various tasks. For instance\, Llemma outperforms the unreleased Minerva model suite on an equi-parameter basis\, and is capable of tool use and formal theorem proving without any further fine-tuning. We openly release all artifacts\, including the Llemma models\, the Proof-Pile II\, and code to replicate our experiments. We hope that Llemma serves as a platform for new research and tools at the intersection of generative models and mathematics. \n  \n 
URL:https://cmsa.fas.harvard.edu/event/nt-102523/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/NTM-10.25.2023.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20231018T140000
DTEND;TZID=America/New_York:20231018T150000
DTSTAMP:20260424T115027Z
CREATED:20240223T114049Z
LAST-MODIFIED:20240223T114049Z
UID:10002867-1697637600-1697641200@cmsa.fas.harvard.edu
SUMMARY:Physics of Language Models: Knowledge Storage\, Extraction\, and Manipulation
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: Yuanzhi Li\, CMU Dept. of Machine Learning and Microsoft Research \nTitle: Physics of Language Models: Knowledge Storage\, Extraction\, and Manipulation \nAbstract: Large language models (LLMs) can memorize a massive amount of knowledge during pre-training\, but can they effectively use this knowledge at inference time? In this work\, we show several striking results about this question. Using a synthetic biography dataset\, we first show that even if an LLM achieves zero training loss when pretraining on the biography dataset\, it sometimes cannot be fine-tuned to answer questions as simple as “What is the birthday of XXX” at all. We show that sufficient data augmentation during pre-training\, such as rewriting the same biography multiple times or simply using the person’s full name in every sentence\, can mitigate this issue. Using linear probing\, we unravel that such augmentation forces the model to store knowledge about a person in the token embeddings of their name rather than other locations. \nWe then show that LLMs are very bad at manipulating knowledge they learn during pre-training unless a chain of thought is used at inference time. We pretrained an LLM on the synthetic biography dataset\, so that it could answer “What is the birthday of XXX” with 100% accuracy. Even so\, it could not be further fine-tuned to answer questions like “Is the birthday of XXX even or odd?” directly. Even using Chain of Thought training data only helps the model answer such questions in a CoT manner\, not directly. \nWe will also discuss preliminary progress on understanding the scaling law of how large a language model needs to be to store X pieces of knowledge and extract them efficiently. For example\, is a 1B-parameter language model enough to store all the knowledge of a middle school student? \n 
URL:https://cmsa.fas.harvard.edu/event/nt-101823/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/NTM-10.18.2023.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20231011T140000
DTEND;TZID=America/New_York:20231011T150000
DTSTAMP:20260424T115027Z
CREATED:20240223T114336Z
LAST-MODIFIED:20240223T114336Z
UID:10002868-1697032800-1697036400@cmsa.fas.harvard.edu
SUMMARY:LeanDojo: Theorem Proving with Retrieval-Augmented Language Models
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: Alex Gu\, MIT Dept. of EE&CS \nTitle: LeanDojo: Theorem Proving with Retrieval-Augmented Language Models \nAbstract: Large language models (LLMs) have shown promise in proving formal theorems using proof assistants such as Lean. However\, existing methods are difficult to reproduce or build on\, due to private code\, data\, and large compute requirements. This has created substantial barriers to research on machine learning methods for theorem proving. We introduce LeanDojo: an open-source Lean playground consisting of toolkits\, data\, models\, and benchmarks. LeanDojo extracts data from Lean and enables interaction with the proof environment programmatically. It contains fine-grained annotations of premises in proofs\, providing valuable data for premise selection: a key bottleneck in theorem proving. Using this data\, we develop ReProver (Retrieval-Augmented Prover): the first LLM-based prover that is augmented with retrieval for selecting premises from a vast math library. It is inexpensive and needs only one GPU week of training. Our retriever leverages LeanDojo’s program analysis capability to identify accessible premises and hard negative examples\, which makes retrieval much more effective. Furthermore\, we construct a new benchmark consisting of 96\,962 theorems and proofs extracted from Lean’s math library. It features a challenging data split requiring the prover to generalize to theorems relying on novel premises that are never used in training. We use this benchmark for training and evaluation\, and experimental results demonstrate the effectiveness of ReProver over non-retrieval baselines and GPT-4. We thus provide the first set of open-source LLM-based theorem provers without any proprietary datasets and release it under a permissive MIT license to facilitate further research. \n 
URL:https://cmsa.fas.harvard.edu/event/nt-101123-2/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-NTM-Seminar-10.11.2023.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20230927T140000
DTEND;TZID=America/New_York:20230927T150000
DTSTAMP:20260424T115027Z
CREATED:20240227T082824Z
LAST-MODIFIED:20240227T082824Z
UID:10002872-1695823200-1695826800@cmsa.fas.harvard.edu
SUMMARY:Transformers for maths\, and maths for transformers
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: François Charton\, Meta AI \nTitle: Transformers for maths\, and maths for transformers \nAbstract: Transformers can be trained to solve problems of mathematics. I present two recent applications\, in mathematics and physics: predicting integer sequences\, and discovering the properties of scattering amplitudes in a close relative of Quantum Chromodynamics. \nProblems of mathematics can also help understand transformers. Using two examples from linear algebra and integer arithmetic\, I show that model predictions can be explained\, that trained models do not confabulate\, and that carefully choosing the training distributions can help achieve better\, and more robust\, performance. \n 
URL:https://cmsa.fas.harvard.edu/event/nt-92723/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-NTM-Seminar-09.27.2023.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20230920T140000
DTEND;TZID=America/New_York:20230920T150000
DTSTAMP:20260424T115027Z
CREATED:20240227T083355Z
LAST-MODIFIED:20240227T083355Z
UID:10002873-1695218400-1695222000@cmsa.fas.harvard.edu
SUMMARY:The TinyStories Dataset: How Small Can Language Models Be and Still Speak Coherent English?
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: Ronen Eldan\, Microsoft Research \nTitle: The TinyStories Dataset: How Small Can Language Models Be and Still Speak Coherent English? \nAbstract: While generative language models exhibit powerful capabilities at large scale\, when either the model or the number of training steps is too small\, they struggle to produce coherent and fluent text: Existing models whose size is below a few billion parameters often do not generate coherent text beyond a few sentences. Hypothesizing that one of the main reasons for the strong reliance on size is the vast breadth and abundance of patterns in the datasets used to train those models\, this motivates the following question: Can we design a dataset that preserves the essential elements of natural language\, such as grammar\, vocabulary\, facts\, and reasoning\, but that is much smaller and more refined in terms of its breadth and diversity? \nIn this talk\, we introduce TinyStories\, a synthetic dataset of short stories that only contain words that 3- to 4-year-olds typically understand\, generated by GPT-3.5/4. We show that TinyStories can be used to train and analyze language models that are much smaller than the state-of-the-art models (below 10 million parameters)\, or have much simpler architectures (with only one transformer block)\, yet still produce fluent and consistent stories with several paragraphs that are diverse and have almost perfect grammar\, and demonstrate certain reasoning capabilities. We also show that the trained models are substantially more interpretable than larger ones\, as we can visualize and analyze the attention and activation patterns of the models\, and show how they relate to the generation process and the story content. We hope that TinyStories can facilitate the development\, analysis and research of language models\, especially for low-resource or specialized domains\, and shed light on the emergence of language capabilities in LMs. \n 
URL:https://cmsa.fas.harvard.edu/event/nt-92023/
LOCATION:Virtual
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-NTM-Seminar-09.20.2023.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20230510T140000
DTEND;TZID=America/New_York:20230510T150000
DTSTAMP:20260424T115027Z
CREATED:20230809T105349Z
LAST-MODIFIED:20240228T104953Z
UID:10001225-1683727200-1683730800@cmsa.fas.harvard.edu
SUMMARY:Modern Hopfield Networks for Novel Transformer Architectures
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: Dmitry Krotov\, IBM Research – Cambridge \nTitle: Modern Hopfield Networks for Novel Transformer Architectures \nAbstract: Modern Hopfield Networks or Dense Associative Memories are recurrent neural networks with fixed point attractor states that are described by an energy function. In contrast to conventional Hopfield Networks\, which were popular in the 1980s\, their modern versions have a very large memory storage capacity\, which makes them appealing tools for many problems in machine learning and cognitive and neurosciences. In this talk\, I will introduce an intuition and a mathematical formulation of this class of models and will give examples of problems in AI that can be tackled using these new ideas. Particularly\, I will introduce an architecture called Energy Transformer\, which replaces the conventional attention mechanism with a recurrent Dense Associative Memory model. I will explain the theoretical principles behind this architectural choice and show promising empirical results on challenging computer vision and graph network tasks.
URL:https://cmsa.fas.harvard.edu/event/nt-51023/
LOCATION:Virtual
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-NTM-Seminar-05.10.23.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20230426T140000
DTEND;TZID=America/New_York:20230426T150000
DTSTAMP:20260424T115027Z
CREATED:20230809T103350Z
LAST-MODIFIED:20240209T151145Z
UID:10001224-1682517600-1682521200@cmsa.fas.harvard.edu
SUMMARY:Toolformer: Language Models Can Teach Themselves to Use Tools
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: Timo Schick\, Meta AI \nTitle: Toolformer: Language Models Can Teach Themselves to Use Tools \nAbstract: Language models exhibit remarkable abilities to solve new tasks from just a few examples or textual instructions\, especially at scale. They also\, paradoxically\, struggle with basic functionality\, such as arithmetic or factual lookup\, where much simpler and smaller models excel. In this talk\, we show how these limitations can be overcome by letting language models teach themselves to use external tools via simple APIs. We discuss Toolformer\, a model trained to independently decide which APIs to call\, when to call them\, what arguments to pass\, and how to best incorporate the results into future token prediction. Through this\, it achieves substantially improved zero-shot performance across a variety of downstream tasks without sacrificing its core language modeling abilities. \n 
URL:https://cmsa.fas.harvard.edu/event/nt-42623/
LOCATION:Virtual
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-NTM-Seminar-04.26.23.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20230308T140000
DTEND;TZID=America/New_York:20230308T150000
DTSTAMP:20260424T115027Z
CREATED:20230808T190051Z
LAST-MODIFIED:20240223T154858Z
UID:10001812-1678284000-1678287600@cmsa.fas.harvard.edu
SUMMARY:How to steer foundation models?
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: Jimmy Ba\, University of Toronto \nTitle: How to steer foundation models? \nAbstract: By conditioning on natural language instructions\, foundation models and large language models (LLMs) have displayed impressive capabilities as general-purpose computers. However\, task performance depends significantly on the quality of the prompt used to steer the model. Due to the lack of knowledge of how foundation models work\, most effective prompts have been handcrafted by humans through a demanding trial-and-error process. To reduce the human effort in this alignment process\, I will discuss a few approaches to steer these powerful models to excel in various downstream language and image tasks. \n 
URL:https://cmsa.fas.harvard.edu/event/nt-3823/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/03.08.2023.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20230209T153000
DTEND;TZID=America/New_York:20230209T170000
DTSTAMP:20260424T115027Z
CREATED:20230705T052251Z
LAST-MODIFIED:20250328T200154Z
UID:10000063-1675956600-1675962000@cmsa.fas.harvard.edu
SUMMARY:Special Lectures on Machine Learning and Protein Folding
DESCRIPTION:The CMSA hosted a series of three 90-minute lectures on the subject of machine learning for protein folding. \nThursday Feb. 9\, Thursday Feb. 16\, & Thursday March 9\, 2023\, 3:30–5:00 pm ET \nLocation: G10\, CMSA\, 20 Garden Street\, Cambridge\, MA 02138 & via Zoom \n  \n  \n \nSpeaker: Nazim Bouatta\, Harvard Medical School \nAbstract: AlphaFold2\, a neural network-based model which predicts protein structures from amino acid sequences\, is revolutionizing the field of structural biology. This lecture series\, given by a leader of the OpenFold project which created an open-source version of AlphaFold2\, will explain the protein structure problem and the detailed workings of these models\, along with many new results and directions for future research. \n\n\n\nThursday\, Feb. 9\, 2023 \n3:30–5:00 pm ET\nLecture 1: Machine learning for protein structure prediction\, Part 1: Algorithm space \nA brief intro to protein biology. AlphaFold2 impacts on experimental structural biology. Co-evolutionary approaches. Space of ‘algorithms’ for protein structure prediction. Proteins as images (CNNs for protein structure prediction). End-to-end differentiable approaches. Attention and long-range dependencies. AlphaFold2 in a nutshell. \n  \n \n\n\n\n  \n\n\n\nThursday\, Feb. 16\, 2023 \n3:30–5:00 pm ET\nLecture 2: Machine learning for protein structure prediction\, Part 2: AlphaFold2 architecture \nTurning the co-evolutionary principle into an algorithm: EvoFormer. Structure module and symmetry principles (equivariance and invariance). OpenFold: retraining AlphaFold2 and insights into its learning mechanisms and capacity for generalization. Applications of variants of AlphaFold2 beyond protein structure prediction: AlphaFold Multimer for protein complexes\, RNA structure prediction.\n\n\n\n  \n\n\n\nThursday\, March 9\, 2023 \n3:30–5:00 pm ET\nLecture 3: Machine learning for protein structure prediction\, Part 3: AlphaFold2 limitations and insights learned from OpenFold \nLimitations of AlphaFold2 and evolutionary ML pipelines. OpenFold: retraining AlphaFold2 yields new insights into its capacity for generalization.\n\n\n\n\n  \nBiography: Nazim Bouatta received his doctoral training in high-energy theoretical physics\, and transitioned to systems biology at Harvard Medical School\, where he received training in cellular and molecular biology in the group of Prof. Judy Lieberman. He is currently a Senior Research Fellow in the Laboratory of Systems Pharmacology led by Prof. Peter Sorger at Harvard Medical School\, and an affiliate of the Department of Systems Biology at Columbia\, in the group of Prof. Mohammed AlQuraishi. He is interested in applying machine learning\, physics\, and mathematics to biology at multiple scales. He recently co-supervised the OpenFold project\, an optimized\, trainable\, and completely open-source version of AlphaFold2. OpenFold has paved the way for many breakthroughs in biology\, including the release of the ESM Metagenomic Atlas containing over 600 million predicted protein structures. \n  \nChair: Michael Douglas (Harvard CMSA) \nModerators: Farzan Vafa & Sergiy Verstyuk (Harvard CMSA) \n\nLecture 1: Machine learning for protein structure prediction\, Part 1: Algorithm space\n \n  \nLecture 2: Machine learning for protein structure prediction\, Part 2: AlphaFold2 architecture\n \n  \nLecture 3: Machine learning for protein structure prediction\, Part 3: AlphaFold2 limitations and insights learned from OpenFold\n \n 
URL:https://cmsa.fas.harvard.edu/event/protein-folding/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Event,Special Lectures,Workshop
ATTACH;FMTTYPE=image/jpeg:https://cmsa.fas.harvard.edu/media/Protein-Folding_8.5x11-scaled.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20221207T140000
DTEND;TZID=America/New_York:20221207T150000
DTSTAMP:20260424T115027Z
CREATED:20230808T185642Z
LAST-MODIFIED:20240116T060930Z
UID:10001215-1670421600-1670425200@cmsa.fas.harvard.edu
SUMMARY:How do Transformers reason? First principles via automata\, semigroups\, and circuits
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: Cyril Zhang\, Microsoft Research \nTitle: How do Transformers reason? First principles via automata\, semigroups\, and circuits \nAbstract: The current “Transformer era” of deep learning is marked by the emergence of combinatorial and algorithmic reasoning capabilities in large sequence models\, leading to dramatic advances in natural language understanding\, program synthesis\, and theorem proving. What is the nature of these models’ internal representations (i.e. how do they represent the states and computational steps of the algorithms they execute)? How can we understand and mitigate their weaknesses\, given that they resist interpretation? In this work\, we present some insights (and many further mysteries) through the lens of automata and their algebraic structure. \nSpecifically\, we investigate the apparent mismatch between recurrent models of computation (automata & Turing machines) and Transformers (which are typically shallow and non-recurrent). Using tools from circuit complexity and semigroup theory\, we characterize shortcut solutions\, whereby a shallow Transformer with only o(T) layers can exactly replicate T computational steps of an automaton. We show that Transformers can efficiently represent these shortcuts in theory; furthermore\, in synthetic experiments\, standard training successfully finds these shortcuts. We demonstrate that shortcuts can lead to statistical brittleness\, and discuss mitigations. \nJoint work with Bingbin Liu\, Jordan Ash\, Surbhi Goel\, and Akshay Krishnamurthy.
URL:https://cmsa.fas.harvard.edu/event/nt-12722/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/12.07.2022.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20221026T140000
DTEND;TZID=America/New_York:20221026T150000
DTSTAMP:20260424T115027Z
CREATED:20230808T185319Z
LAST-MODIFIED:20240115T103149Z
UID:10001214-1666792800-1666796400@cmsa.fas.harvard.edu
SUMMARY:From Engine to Auto
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeakers: João Araújo\, Mathematics Department\, Universidade Nova de Lisboa and Michael Kinyon\, Department of Mathematics\, University of Denver \n\nTitle: From Engine to Auto \n\n\nAbstract: Bill McCune produced the program EQP\, which deals with first-order logic formulas\, and in 1996 managed to solve Robbins’ Conjecture. This very powerful tool reduces to triviality any result that can be obtained by encoding the assumptions and the goals. The next step was to turn the program into a genuine assistant for the working mathematician: find ways to help the prover with proofs; reduce the lengths of the automatic proofs to better crack them; solve problems in higher-order logic; devise tools that autonomously prove results of a given type\, etc.\n\nIn this talk we are going to show some of the tools and strategies we have been producing. There will be real illustrations of theorems obtained for groups\, loops\, semigroups\, logic algebras\, lattices and generalizations\, quandles\, and many more.
URL:https://cmsa.fas.harvard.edu/event/nt-102622/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-NTM-Seminar-10.26.2022.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20221019T140000
DTEND;TZID=America/New_York:20221019T150000
DTSTAMP:20260424T115027Z
CREATED:20230808T184955Z
LAST-MODIFIED:20240215T095357Z
UID:10001213-1666188000-1666191600@cmsa.fas.harvard.edu
SUMMARY:Towards Faithful Reasoning Using Language Models
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: Antonia Creswell\, DeepMind \nTitle: Towards Faithful Reasoning Using Language Models \nAbstract: Language models are showing impressive performance on many natural language tasks\, including question-answering. However\, language models – like most deep learning models – are black boxes. We cannot be sure how they obtain their answers. Do they reason over relevant knowledge to construct an answer or do they rely on prior knowledge – baked into their weights – which may be biased? An alternative approach is to develop models whose output is a human interpretable\, faithful reasoning trace leading to an answer. In this talk we will characterise faithful reasoning in terms of logically valid reasoning and demonstrate where current reasoning models fall short. Following this\, we will introduce Selection-Inference\, a faithful reasoning model\, whose causal structure mirrors the requirements for valid reasoning. We will show that our model not only produces more accurate reasoning traces but also improves final answer accuracy. \n  \n 
URL:https://cmsa.fas.harvard.edu/event/nt-101922/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/10.19.2022.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20221005T140000
DTEND;TZID=America/New_York:20221005T160000
DTSTAMP:20260424T115027Z
CREATED:20230808T184616Z
LAST-MODIFIED:20240214T110102Z
UID:10001212-1664978400-1664985600@cmsa.fas.harvard.edu
SUMMARY:Minerva: Solving Quantitative Reasoning Problems with Language Models
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: Guy Gur-Ari\, Google Research \nTitle: Minerva: Solving Quantitative Reasoning Problems with Language Models \nAbstract: Quantitative reasoning tasks which can involve mathematics\, science\, and programming are often challenging for machine learning models in general and for language models in particular. We show that transformer-based language models obtain significantly better performance on math and science questions when trained in an unsupervised way on a large\, math-focused dataset. Performance can be further improved using prompting and sampling techniques including chain-of-thought and majority voting. Minerva\, a model that combines these techniques\, achieves SOTA on several math and science benchmarks. I will describe the model\, its capabilities and limitations.
URL:https://cmsa.fas.harvard.edu/event/minerva-solving-quantitative-reasoning-problems-with-language-models/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/10.05.2022.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20220914T140000
DTEND;TZID=America/New_York:20220914T150000
DTSTAMP:20260424T115027Z
CREATED:20230808T183823Z
LAST-MODIFIED:20240301T091205Z
UID:10001210-1663164000-1663167600@cmsa.fas.harvard.edu
SUMMARY:Breaking the one-mind-barrier in mathematics using formal verification
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: Johan Commelin\, Mathematisches Institut\, Albert-Ludwigs-Universität Freiburg \nTitle: Breaking the one-mind-barrier in mathematics using formal verification \nAbstract: In this talk I will argue that formal verification helps break the one-mind-barrier in mathematics. Indeed\, formal verification allows a team of mathematicians to collaborate on a project\, without one person understanding all parts of the project. At the same time\, it also allows a mathematician to rapidly free mental RAM in order to work on a different component of a project. It thus also expands the one-mind-barrier. \nI will use the Liquid Tensor Experiment as an example\, to illustrate the above two points. This project recently finished the formalization of the main theorem of liquid vector spaces\, following up on a challenge by Peter Scholze. \nVideo
URL:https://cmsa.fas.harvard.edu/event/breaking-the-one-mind-barrier-in-mathematics-using-formal-verification/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:New Technologies in Mathematics Seminar
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20220415T090000
DTEND;TZID=America/New_York:20220415T130000
DTSTAMP:20260424T115027Z
CREATED:20230705T083343Z
LAST-MODIFIED:20240229T102446Z
UID:10000088-1650013200-1650027600@cmsa.fas.harvard.edu
SUMMARY:Workshop on Machine Learning and Mathematical Conjecture
DESCRIPTION:On April 15\, 2022\, the CMSA will hold a one-day workshop\, Machine Learning and Mathematical Conjecture\, related to the New Technologies in Mathematics Seminar Series. \nLocation: Room G10\, 20 Garden Street\, Cambridge\, MA 02138. \nOrganizers: Michael R. Douglas (CMSA/Stony Brook/IAIFI) and Peter Chin (CMSA/BU). \nMachine learning has driven many exciting recent scientific advances. It has enabled progress on long-standing challenges such as protein folding\, and it has helped mathematicians and mathematical physicists create new conjectures and theorems in knot theory\, algebraic geometry\, and representation theory. \nAt this workshop\, we will bring together mathematicians\, theoretical physicists\, and machine learning researchers to review the state of the art in machine learning\, discuss how ML results can be used to inspire\, test and refine precise conjectures\, and identify mathematical questions which may be suitable for this approach. \nSpeakers: \n\nJames Halverson\, Northeastern University Dept. of Physics and IAIFI\nFabian Ruehle\, Northeastern University Dept. of Physics and Mathematics and IAIFI\nAndrew Sutherland\, MIT Department of Mathematics\n 
URL:https://cmsa.fas.harvard.edu/event/workshop-on-machine-learning-and-mathematical-conjecture/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Event,Workshop
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/Machine-Learning.png
END:VEVENT
END:VCALENDAR