BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//CMSA - ECPv6.15.18//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://cmsa.fas.harvard.edu
X-WR-CALDESC:Events for CMSA
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-ROBOTS-TAG:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20240310T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20241103T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20250309T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20251102T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20260308T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20261101T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20270314T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20271107T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260318T140000
DTEND;TZID=America/New_York:20260318T150000
DTSTAMP:20260430T233425Z
CREATED:20260309T145907Z
LAST-MODIFIED:20260311T161332Z
UID:10003916-1773842400-1773846000@cmsa.fas.harvard.edu
SUMMARY:Dynamic reasoning
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: Emmanuel Abbé\, EPFL\, Institute of Mathematics and School of Computer and Communication Sciences & Apple \nTitle: Dynamic reasoning \nAbstract: In the current AI landscape\, reasoning is frequently equated with the generation of intermediate “thinking traces”. However\, these traces are merely a mechanism\, not the ultimate objective.\nRelying solely on the presence of a trace can be deceptive\, as models often learn to mimic the format of reasoning while effectively overfitting to specific training distributions.\nTo build more robust and versatile reasoners\, we shift our focus to more specific structural properties of the thinking process\, in particular compositionality (inductive CoT\, AdaBack) and abstraction (AbstRaL).
URL:https://cmsa.fas.harvard.edu/event/newtech_31826-2/
LOCATION:Virtual
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-NTM-Seminar-3.18.2026.docx-1-scaled.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260225T140000
DTEND;TZID=America/New_York:20260225T150000
DTSTAMP:20260430T233425Z
CREATED:20260210T192336Z
LAST-MODIFIED:20260210T194238Z
UID:10003894-1772028000-1772031600@cmsa.fas.harvard.edu
SUMMARY:Scaling Stochastic Momentum from Theory to LLMs
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: Courtney Paquette\, McGill University \nTitle: Scaling Stochastic Momentum from Theory to LLMs \nAbstract: Given the massive scale of modern ML models\, we now often get only a single shot to train them effectively. This limits our ability to sweep architectures and hyperparameters\, making it essential to understand how learning algorithms scale so insights from small models transfer to large ones. \nIn this talk\, I present a framework for analyzing scaling laws of stochastic momentum methods using a power-law random features model\, leveraging tools from high-dimensional probability and random matrix theory. We show that standard SGD with momentum does not improve scaling exponents\, while dimension-adapted Nesterov acceleration (DANA)—which explicitly adapts momentum to model size and data/target complexity—achieves strictly better loss and compute scaling. DANA does this by rescaling its momentum parameters with dimension\, effectively matching the optimizer’s memory to the problem geometry. \nMotivated by these theoretical insights\, I introduce logarithmic-time scheduling for large language models and propose ADANA\, an AdamW-like optimizer with growing memory and explicit damping. Across transformer scales (45M to 2.6B parameters)\, ADANA yields up to 40% compute savings over tuned AdamW\, with gains that improve at scale. \nBased on joint work with Damien Ferbach\, Elliot Paquette\, Katie Everett\, and Gauthier Gidel.
URL:https://cmsa.fas.harvard.edu/event/newtech_22526/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-NTM-Seminar-2.25.2026.docx-scaled.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260211T140000
DTEND;TZID=America/New_York:20260211T150000
DTSTAMP:20260430T233425Z
CREATED:20260126T152202Z
LAST-MODIFIED:20260126T212834Z
UID:10003878-1770818400-1770822000@cmsa.fas.harvard.edu
SUMMARY:ReLU and Softplus neural nets as zero-sum\, turn-based\, stopping games
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: Yiannis Vlassopoulos\, Athena Research Center \nTitle: ReLU and Softplus neural nets as zero-sum\, turn-based\, stopping games \nAbstract: Neural networks are for the most part treated as black boxes. In an effort to begin elucidating the mathematical structure they encode\, we will explain how ReLU neural nets can be interpreted as zero-sum\, turn-based\, stopping games. The game runs in the opposite direction to the net: the input to the net is the terminal reward of the game\, and the output of the net is the value of the game at its initial states. The bias at each neuron is used to define the reward\, and the weights are used to define state-transition probabilities. One player\, Max\, tries to maximize the reward\, while the other\, Min\, tries to minimize it. Every neuron gives rise to two game states\, one where Max plays and one where Min plays. In fact\, running the ReLU net is equivalent to the Shapley-Bellman backward recursion for the value of the game. As a corollary of this construction\, we get a path-integral expression for the output of the net given its input. Moreover\, using the fact that the Shapley operator is monotonic (with respect to the coordinate-wise order)\, we get bounds on the output of the net from bounds on the input. Adding an entropic regularization to the ReLU net game allows us to interpret Softplus neural nets as games in an analogous fashion.\nThis is joint work with Stéphane Gaubert.
URL:https://cmsa.fas.harvard.edu/event/newtech_21126/
LOCATION:Virtual
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-NTM-Seminar-2.11.2026.docx-1-scaled.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260204T140000
DTEND;TZID=America/New_York:20260204T150000
DTSTAMP:20260430T233425Z
CREATED:20250128T214750Z
LAST-MODIFIED:20260126T163315Z
UID:10003708-1770213600-1770217200@cmsa.fas.harvard.edu
SUMMARY:Automated Theory Formation and Interestingness in Mathematics
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: George Tsoukalas\, UT Austin Dept. of Computer Science and Google DeepMind \nTitle: Automated Theory Formation and Interestingness in Mathematics \nAbstract: Advances in modern learning systems are beginning to demonstrate utility for select problems in research mathematics. A broader challenge is that of developing new theories automatically. This area has a rich history and is tied to some of the earliest work in AI. In particular\, a central question in this line of work has been how to measure the “interestingness” of mathematical concepts. \nIn this talk\, I will review this historical context and present our recent work on using large language models to synthesize interestingness measures that guide from-scratch theory exploration in elementary number theory. I will conclude by outlining potential future research directions in this domain. \nJoint work done at UT Austin with Rahul Saha\, Amitayush Thakur\, Sabrina Reguyal\, and Swarat Chaudhuri.
URL:https://cmsa.fas.harvard.edu/event/newtech_2426/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-NTM-Seminar-2.4.2026.docx-1-scaled.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251203T140000
DTEND;TZID=America/New_York:20251203T150000
DTSTAMP:20260430T233425Z
CREATED:20251110T191407Z
LAST-MODIFIED:20251110T225824Z
UID:10003833-1764770400-1764774000@cmsa.fas.harvard.edu
SUMMARY:Machine learning tools for mathematical discovery
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: Adam Zsolt Wagner\, Google DeepMind \nTitle: Machine learning tools for mathematical discovery \nAbstract: I will discuss various ML tools we can use today to try to find interesting constructions for various mathematical problems. I will briefly mention simple reinforcement learning setups and PatternBoost\, but the talk will mainly focus on LLM-based tools such as FunSearch and AlphaEvolve. We will discuss the pros and cons of several of these methods\, and try to figure out which one is best for the problems we care about.\nJoint work with François Charton\, Jordan Ellenberg\, Bogdan Georgiev\, Javier Gómez-Serrano\, Terence Tao\, and Geordie Williamson.
URL:https://cmsa.fas.harvard.edu/event/newtech_12325/
LOCATION:Virtual
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-NTM-Seminar-12.3.2025-scaled.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251105T140000
DTEND;TZID=America/New_York:20251105T150000
DTSTAMP:20260430T233425Z
CREATED:20251027T142022Z
LAST-MODIFIED:20251027T144043Z
UID:10003826-1762351200-1762354800@cmsa.fas.harvard.edu
SUMMARY:Discovery of unstable singularity with machine precision
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: Yongji Wang\, NYU Courant Institute of Mathematical Sciences \nTitle: Discovery of unstable singularity with machine precision \nAbstract: Whether singularities can form in fluids remains a foundational unanswered question in mathematics. This phenomenon occurs when solutions to governing equations\, such as the 3D Euler equations\, develop infinite gradients from smooth initial conditions. Historically\, numerical approaches have primarily identified stable singularities. However\, these are not expected to exist for key open problems\, such as the boundary-free Euler and Navier-Stokes cases (the latter being a Millennium Prize problem). For these problems\, the true challenge lies in finding unstable singularities\, which are exceptionally elusive\, as any tiny perturbation can divert the system from its blow-up trajectory. \nIn this talk\, I will present a new computational framework which has led to the first systematic discovery of new families of unstable singularities in various fluid equations. Our approach merges curated machine learning architectures with a multi-stage training scheme and a high-precision Gauss-Newton optimizer\, creating a powerful tool for navigating the complex landscape of nonlinear PDEs. Beyond discovering these singularities\, the precision of this method is another key breakthrough: it achieves unprecedented accuracy on the order of $10^{-13}$\, a level constrained only by the round-off errors of the GPU hardware. This level of precision meets the stringent requirements for rigorous mathematical validation of the discovered solutions via computer-assisted proofs\, offering a new pathway to resolving long-standing challenges in mathematical physics.
URL:https://cmsa.fas.harvard.edu/event/newtech_11525/
LOCATION:CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-NTM-Seminar-11.5.2025-scaled.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251022T140000
DTEND;TZID=America/New_York:20251022T150000
DTSTAMP:20260430T233425Z
CREATED:20251008T132005Z
LAST-MODIFIED:20251008T133142Z
UID:10003808-1761141600-1761145200@cmsa.fas.harvard.edu
SUMMARY:The Carleson project: A collaborative formalization
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: María Inés de Frutos Fernández\, Mathematical Institute\, University of Bonn \nTitle: The Carleson project: A collaborative formalization \nAbstract: A well-known result in Fourier analysis establishes that the partial Fourier sums of a smooth periodic function $f$ converge uniformly to $f$\, but the situation is much more subtle for\, e.g.\, merely continuous functions. However\, in 1966 Carleson proved that they do converge at almost all points for $L^2$ periodic functions on the real line. Carleson’s proof is famously hard to read\, and there are no known easy proofs of this theorem. As a large collaborative project\, we have formalized in Lean a generalization of Carleson’s theorem in the setting of doubling metric measure spaces (proven in 2023)\, and Carleson’s original result as a corollary. In this talk\, I will give an overview of the project\, with a focus on how the collaboration was organized.
URL:https://cmsa.fas.harvard.edu/event/newtech_102225/
LOCATION:Virtual
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-NTM-Seminar-10.22.2025-scaled.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251008T140000
DTEND;TZID=America/New_York:20251008T150000
DTSTAMP:20260430T233425Z
CREATED:20250930T181425Z
LAST-MODIFIED:20251009T195959Z
UID:10003801-1759932000-1759935600@cmsa.fas.harvard.edu
SUMMARY:Understanding Optimization in Deep Learning with Central Flows
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: Alex Damian\, Harvard \nTitle: Understanding Optimization in Deep Learning with Central Flows \nAbstract: Traditional theories of optimization cannot describe the dynamics of optimization in deep learning\, even in the simple setting of deterministic training. The challenge is that optimizers typically operate in a complex\, oscillatory regime called the “edge of stability.” In this work\, we develop theory that can describe the dynamics of optimization in this regime. Our key insight is that while the *exact* trajectory of an oscillatory optimizer may be challenging to analyze\, the *time-averaged* (i.e. smoothed) trajectory is often much more tractable. To analyze an optimizer\, we derive a differential equation called a “central flow” that characterizes this time-averaged trajectory. We empirically show that these central flows can predict long-term optimization trajectories for generic neural networks with a high degree of numerical accuracy. By interpreting these central flows\, we are able to understand how gradient descent makes progress even as the loss sometimes goes up\; how adaptive optimizers “adapt” to the local loss landscape\; and how adaptive optimizers implicitly navigate towards regions where they can take larger steps. Our results suggest that central flows can be a valuable theoretical tool for reasoning about optimization in deep learning.
URL:https://cmsa.fas.harvard.edu/event/newtech_10825/
LOCATION:Hybrid – G10
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-NTM-Seminar-10.8.2025-scaled.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251001T140000
DTEND;TZID=America/New_York:20251001T150000
DTSTAMP:20260430T233425Z
CREATED:20250128T214901Z
LAST-MODIFIED:20251002T140605Z
UID:10003710-1759327200-1759330800@cmsa.fas.harvard.edu
SUMMARY:Tropicalized quantum field theory
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: Michael Borinsky\, Perimeter Institute  \nTitle: Tropicalized quantum field theory \nAbstract: Quantum field theory (QFT) is one of the most accurate methods for making phenomenological predictions in physics\, but it has a significant drawback: obtaining concrete predictions from it is computationally very demanding. The standard perturbative approach expands an interacting QFT around a free QFT\, using Feynman diagrams. However\, the number of these diagrams grows superexponentially\, making the approach quickly infeasible. \nI will talk about arXiv:2508.14263\, which introduces an intermediate layer between free and interacting field theories: a tropicalized QFT. Often\, this tropicalized QFT can be solved exactly. The exact solution manifests as a non-linear recursion equation fulfilled by the expansion coefficients of the quantum effective action. Geometrically\, this recursion computes volumes of moduli spaces of metric graphs and is thereby analogous to Mirzakhani’s volume recursions on the moduli space of curves. Building on this exact solution\, an algorithm can be constructed that samples points from the moduli space of graphs approximately proportional to their perturbative contribution. Via a standard Monte Carlo approach we can evaluate the original QFT using this algorithm. Remarkably\, this algorithm requires only polynomial time and memory\, suggesting that perturbative quantum field theory computations actually lie in the polynomial-time complexity class\, while all known algorithms for evaluating individual Feynman integrals are at least exponential in time and memory. The (potential) capabilities of this approach are remarkable: For instance\, we can compute perturbative expansions of massive scalar D=3 phi^3 and D=4 phi^4 quantum field theories up to loop orders between 20 and 50 using a basic proof-of-concept implementation. These perturbative orders are completely inaccessible using a naive approach.
URL:https://cmsa.fas.harvard.edu/event/newtech_10125/
LOCATION:Virtual
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-NTM-Seminar-10.1.2025.docx-1-scaled.png
END:VEVENT
END:VCALENDAR