BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//CMSA - ECPv6.15.18//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:CMSA
X-ORIGINAL-URL:https://cmsa.fas.harvard.edu
X-WR-CALDESC:Events for CMSA
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20240310T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20241103T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20250309T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20251102T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20260308T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20261101T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250915T090000
DTEND;TZID=America/New_York:20250918T170000
DTSTAMP:20260430T180347Z
CREATED:20250710T134311Z
LAST-MODIFIED:20250930T154307Z
UID:10003755-1757926800-1758214800@cmsa.fas.harvard.edu
SUMMARY:The Geometry of Machine Learning
DESCRIPTION:The Geometry of Machine Learning \nDates: September 15–18\, 2025 \nLocation: Harvard CMSA\, Room G10\, 20 Garden Street\, Cambridge MA 02138 \nDespite the extraordinary progress in large language models\, mathematicians suspect that other dimensions of intelligence must be defined and simulated to complete the picture. Geometric and symbolic reasoning are among these. In fact\, there seems to be much to learn about existing ML by considering it from a geometric perspective\, e.g. what is happening to the data manifold as it moves through an NN? How can geometric and symbolic tools be interfaced with LLMs? A more distant goal\, one that seems only approachable through AIs\, would be to gain some insight into the large-scale structure of mathematics as a whole: the geometry of math\, rather than geometry as a subject within math. This conference is intended to begin a discussion on these topics. \nSpeakers \n\nMaissam Barkeshli\, University of Maryland\nEve Bodnia\, Logical Intelligence\nAdam Brown\, Stanford\nBennett Chow\, UCSD & IAS\nMichael Freedman\, Harvard CMSA\nElliott Glazer\, Epoch AI\nJames Halverson\, Northeastern\nJesse Han\, Math Inc.\nJunehyuk Jung\, Brown University\nAlex Kontorovich\, Rutgers University\nYann LeCun\, New York University & Meta*\nJared Duker Lichtman\, Stanford & Math Inc.\nBrice Ménard\, Johns Hopkins\nMichael Mulligan\, UCR & Logical Intelligence\nPatrick Shafto\, DARPA & Rutgers University\n\nOrganizers: Michael R. Douglas (CMSA) and Mike Freedman (CMSA) \n  \nGeometry of Machine Learning YouTube Playlist \n  \nSchedule \nMonday\, Sep. 15\, 2025 \n\n\n\n8:30–9:00 am\nMorning refreshments\n\n\n9:00–10:00 am\nJames Halverson\, Northeastern \nTitle: Sparsity and Symbols with Kolmogorov-Arnold Networks \nAbstract: In this talk I’ll review Kolmogorov-Arnold nets\, as well as new theory and applications related to sparsity and symbolic regression\, respectively. 
I’ll review essential results regarding KANs\, show how sparsity masks relate deep nets and KANs\, and how KANs can be utilized alongside multimodal language models for symbolic regression. Empirical results will necessitate a few slides\, but the bulk will be chalk.\n\n\n10:00–10:30 am\nBreak\n\n\n10:30–11:30 am\nMaissam Barkeshli\, University of Maryland \nTitle: Transformers and random walks: from language to random graphs \nAbstract: The stunning capabilities of large language models give rise to many questions about how they work and how much more capable they can possibly get. One way to gain additional insight is via synthetic models of data with tunable complexity\, which can capture the basic relevant structures of real data. In recent work we have focused on sequences obtained from random walks on graphs\, hypergraphs\, and hierarchical graphical structures. I will present some recent empirical results for work in progress regarding how transformers learn sequences arising from random walks on graphs. The focus will be on neural scaling laws\, unexpected temperature-dependent effects\, and sample complexity.\n\n\n11:30 am–12:00 pm\nBreak\n\n\n12:00–1:00 pm\nAdam Brown\, Stanford \nTitle: LLMs\, Reasoning\, and the Future of Mathematical Sciences \nAbstract: Over the last half decade\, the mathematical capabilities of large language models (LLMs) have leapt from preschooler to undergraduate and now beyond. This talk reviews recent progress\, and speculates as to what it will mean for the future of mathematical sciences if these trends continue.\n\n\n\n  \nTuesday\, Sep. 16\, 2025 \n\n\n\n8:30–9:00 am\nMorning refreshments\n\n\n9:00–10:00 am\nJunehyuk Jung\, Brown University \nTitle: AlphaGeometry: a step toward automated math reasoning \nAbstract: Last summer\, Google DeepMind’s AI systems made headlines by achieving Silver Medal level performance on the notoriously challenging International Mathematical Olympiad (IMO) problems. 
For instance\, AlphaGeometry 2\, one of these remarkable systems\, solved the geometry problem in a mere 19 seconds! \nIn this talk\, we will delve into the inner workings of AlphaGeometry\, exploring the innovative techniques that enable it to tackle intricate geometric puzzles. We will uncover how this AI system combines the power of neural networks with symbolic reasoning to discover elegant solutions.\n\n\n10:00–10:30 am\nBreak\n\n\n10:30–11:30 am\nBennett Chow\, UCSD and IAS \nTitle: Ricci flow as a test for AI\n\n\n11:30 am–12:00 pm\nBreak\n\n\n12:00–1:00 pm\nJared Duker Lichtman\, Stanford & Math Inc. and Jesse Han\, Math Inc. \nTitle: Gauss – towards autoformalization for the working mathematician \nAbstract: In this talk we’ll highlight some recent formalization progress using a new agent – Gauss. We’ll outline a recent Lean proof of the Prime Number Theorem in strong form\, completing a challenge set in January 2024 by Alex Kontorovich and Terry Tao. We hope Gauss will assist working mathematicians\, especially those who do not write formal code themselves.\n\n\n5:00–6:00 pm\nSpecial Lecture: Yann LeCun\, Science Center Hall C\n\n\n\n  \nWednesday\, Sep. 17\, 2025 \n\n\n\n8:30–9:00 am\nRefreshments\n\n\n9:00–10:00 am\nMichael Mulligan\, UCR and Logical Intelligence \nTitle: Spontaneous Kolmogorov-Arnold Geometry in Vanilla Fully-Connected Neural Networks \nAbstract: The Kolmogorov-Arnold (KA) representation theorem constructs universal\, but highly non-smooth inner functions (the first layer map) in a single (non-linear) hidden layer neural network. Such universal functions have a distinctive local geometry\, a “texture\,” which can be characterized by the inner function’s Jacobian\, $J(\mathbf{x})$\, as $\mathbf{x}$ varies over the data. It is natural to ask if this distinctive KA geometry emerges through conventional neural network optimization. 
We find that indeed KA geometry often does emerge through the process of training vanilla single hidden layer fully-connected neural networks (MLPs). We quantify KA geometry through the statistical properties of the exterior powers of $J(\mathbf{x})$: number of zero rows and various observables for the minor statistics of $J(\mathbf{x})$\, which measure the scale and axis alignment of $J(\mathbf{x})$. This leads to a rough phase diagram in the space of function complexity and model hyperparameters where KA geometry occurs. The motivation is first to understand how neural networks organically learn to prepare input data for later downstream processing and\, second\, to learn enough about the emergence of KA geometry to accelerate learning through a timely intervention in network hyperparameters. This research is the “flip side” of KA-Networks (KANs). We do not engineer KA into the neural network\, but rather watch KA emerge in shallow MLPs.\n\n\n10:00–10:30 am\nBreak\n\n\n10:30–11:30 am\nEve Bodnia\, Logical Intelligence \nTitle: \nAbstract: We introduce a method of topological analysis on spiking correlation networks in neurological systems. This method explores the neural manifold as in the manifold hypothesis\, which posits that information is often represented by a lower-dimensional manifold embedded in a higher-dimensional space. After collecting neuron activity from human and mouse organoids using a micro-electrode array\, we extract connectivity using pairwise spike-timing correlations\, which are optimized for time delays introduced by synaptic delays. We then look at network topology to identify emergent structures and compare the results to two randomized models – constrained randomization and bootstrapping across datasets. 
In histograms of the persistence of topological features\, we see that the features from the original dataset consistently exceed the variability of the null distributions\, suggesting that the observed topological features reflect significant correlation patterns in the data rather than random fluctuations. In a study of network resiliency\, we found that random removal of 10% of nodes still yielded a network with a lesser but still significant number of topological features in the homology group H1 (which counts 2-dimensional voids in the dataset) above the variability of our constrained randomization model; however\, targeted removal of nodes in H1 features resulted in rapid topological collapse\, indicating that the H1 cycles in these brain organoid networks are fragile and highly sensitive to perturbations. By applying topological analysis to neural data\, we offer a new complementary framework to standard methods for understanding information processing across a variety of complex neural systems.\n\n\n11:30 am–12:00 pm\nBreak\n\n\n12:00–1:00 pm\nAlex Kontorovich\, Rutgers University \nTitle: The Shape of Math to Come \nAbstract: We will discuss some ongoing experiments that may have meaningful impact on what working in research mathematics might look like in a decade (if not sooner).\n\n\n5:00–6:00 pm\nMike Freedman Millennium Lecture: The Poincaré Conjecture and Mathematical Discovery (Science Center Hall D)\n\n\n\n  \nThursday\, Sep. 18\, 2025 \n\n\n\n8:30–9:00 am\nMorning refreshments\n\n\n9:00–10:00 am\nElliott Glazer\, Epoch AI \nTitle: FrontierMath to Infinity \nAbstract: I will discuss FrontierMath\, a mathematical problem solving benchmark I developed over the past year\, including its design philosophy and what we’ve learned about AI’s trajectory from it. 
I will then look much further out\, speculate about what a “perfectly efficient” mathematical intelligence should be capable of\, and discuss how high-ceiling math capability metrics can illuminate the path towards that ideal.\n\n\n10:00–10:30 am\nBreak\n\n\n10:30–11:30 am\nBrice Ménard\, Johns Hopkins \nTitle: Demystifying the over-parametrization of neural networks \nAbstract: I will show how to estimate the dimensionality of neural encodings (learned weight structures) to assess how many parameters are effectively used by a neural network. I will then show how their scaling properties provide us with fundamental exponents on the learning process of a given task. I will comment on connections to thermodynamics.\n\n\n11:30 am–12:00 pm\nBreak\n\n\n12:00–12:30 pm\nPatrick Shafto\, Rutgers \nTitle: Math for AI and AI for Math \nAbstract: I will briefly discuss two DARPA programs aiming to deepen connections between mathematics and AI\, specifically through geometric and symbolic perspectives. The first aims for mathematical foundations for understanding the behavior and performance of modern AI systems such as Large Language Models and Diffusion models. The second aims to develop AI for pure mathematics through an understanding of abstraction\, decomposition\, and formalization. I will close with some thoughts on the coming convergence between AI and math.\n\n\n12:30–12:45 pm\nBreak\n\n\n12:45–2:00 pm\nMike Freedman\, Harvard CMSA \nTitle: How to think about the shape of mathematics \nFollowed by group discussion \n\nSupport provided by Logical Intelligence.
URL:https://cmsa.fas.harvard.edu/event/mlgeometry/
LOCATION:CMSA 20 Garden Street Cambridge\, Massachusetts 02138 United States
CATEGORIES:Conference,Event
ATTACH;FMTTYPE=image/jpeg:https://cmsa.fas.harvard.edu/media/GML_2025.7-scaled.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250916T170000
DTEND;TZID=America/New_York:20250916T180000
DTSTAMP:20260430T180347Z
CREATED:20250807T142820Z
LAST-MODIFIED:20250922T134159Z
UID:10003760-1758042000-1758045600@cmsa.fas.harvard.edu
SUMMARY:Geometry of Machine Learning Special Lecture: Yann LeCun
DESCRIPTION:Geometry of Machine Learning Special Lecture: Yann LeCun \nTitle: Self-Supervised Learning\, JEPA\, World Models\, and the future of AI \nDate: Tuesday\, Sep. 16\, 2025 \nTime: 5:00 pm ET \nLocation: Harvard Science Center\, Hall C & via Zoom Webinar
URL:https://cmsa.fas.harvard.edu/event/lecun91625/
LOCATION:Harvard Science Center\, 1 Oxford Street\, Cambridge\, MA\, 02138
CATEGORIES:Special Lectures
ATTACH;FMTTYPE=image/jpeg:https://cmsa.fas.harvard.edu/media/YannLeCun_GML-scaled.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250917T170000
DTEND;TZID=America/New_York:20250917T180000
DTSTAMP:20260430T180347Z
CREATED:20250311T134916Z
LAST-MODIFIED:20251010T115024Z
UID:10003656-1758128400-1758132000@cmsa.fas.harvard.edu
SUMMARY:Millennium Prize Problems Lecture - Michael Freedman: The Poincaré Conjecture and Mathematical Discovery
DESCRIPTION:Millennium Prize Problems Lecture\nDate: September 17\, 2025 \nLocation: Harvard Science Center Hall D & via Zoom Webinar \nTime: 5:00–6:00 pm \nSpeaker: Michael Freedman\, Harvard CMSA and Logical Intelligence \nTitle: The Poincaré Conjecture and Mathematical Discovery \nAbstract: The AI age requires us to re-examine what mathematics is about. The Seven Millennium Problems provide an ideal lens for doing so. Five of the seven are core mathematical questions\, two are meta-mathematical – asking about the scope of mathematics. The Poincaré conjecture represents one of the core subjects\, manifold topology. I’ll explain what it is about\, its broader context\, and why people cared so much about finding a solution\, which ultimately arrived through the work of R. Hamilton and G. Perelman. Although stated in manifold topology\, the proof requires vast developments in the theory of parabolic partial differential equations\, some of which I will sketch. Like most powerful techniques\, the methods survive their original objectives and are now deployed widely in both three- and four-dimensional manifold topology. \n  \nRead more about the Poincaré Conjecture at the Clay Math website. \nOrganizers: Martin Bridson\, Clay Mathematics Institute | Dan Freed\, Harvard University and CMSA | Mike Hopkins\, Harvard University \n\nMillennium Prize Problems Lecture Series
URL:https://cmsa.fas.harvard.edu/event/clay_91725/
LOCATION:Harvard Science Center Hall D\, 1 Oxford Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Millennium Prize Problems Lecture,Special Lectures
ATTACH;FMTTYPE=image/jpeg:https://cmsa.fas.harvard.edu/media/Freedman_web_ad.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250918T160000
DTEND;TZID=America/New_York:20250918T170000
DTSTAMP:20260430T180347Z
CREATED:20250904T162209Z
LAST-MODIFIED:20250910T174655Z
UID:10003777-1758211200-1758214800@cmsa.fas.harvard.edu
SUMMARY:Moduli spaces of 4d N=2 quantum field theories
DESCRIPTION:Differential Geometry and Physics Seminar \nSpeaker: Robert Moscrop\, CMSA \nTitle: Moduli spaces of 4d N=2 quantum field theories \nAbstract: Supersymmetry endows quantum field theories with several rich algebraic and geometric structures associated to their moduli space of vacua\, providing powerful tools to study such theories non-perturbatively. For example\, in four-dimensional theories with eight supercharges\, the low energy dynamics of the theory is captured by an algebraic completely integrable system whose base is the Coulomb branch – a particular distinguished submanifold of the moduli space. This structure is so tightly constrained that there is an ongoing program to classify such theories purely by understanding their Coulomb branch geometry. In this talk\, I will give a gentle introduction to the geometry of the moduli spaces of 4d N=2 theories and\, time permitting\, discuss some recent results showcasing how the geometry of the Coulomb branch can be used to constrain certain physical quantities of the theory.
URL:https://cmsa.fas.harvard.edu/event/dgphys_91825/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Differential Geometry and Physics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/DG-Physics-Seminar-9.18.2025-scaled.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250919T120000
DTEND;TZID=America/New_York:20250919T130000
DTSTAMP:20260430T180347Z
CREATED:20241211T195345Z
LAST-MODIFIED:20250918T184123Z
UID:10003648-1758283200-1758286800@cmsa.fas.harvard.edu
SUMMARY:Top-Down Perspectives on Symmetry Theories
DESCRIPTION:Member Seminar \nSpeaker: Max Hübner \nTitle: Top-Down Perspectives on Symmetry Theories \nAbstract: I will review the construction and utility of symmetry theories for string-constructed quantum field theories. Symmetry theories are extra-dimensional auxiliary theories separating aspects of a quantum field theory’s symmetries from many of its more messy features. For QFTs with extra-dimensional string constructions the symmetry theory derives directly from the extra-dimensional geometry. This perspective allows for the study of symmetries of famously string-engineered systems\, such as SCFTs in 5D and 6D\, which we will discuss on an example-by-example basis.
URL:https://cmsa.fas.harvard.edu/event/member-seminar-91925/
LOCATION:Common Room\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Member Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-Member-Seminar-9.19.25-scaled.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250922T150000
DTEND;TZID=America/New_York:20250922T160000
DTSTAMP:20260430T180348Z
CREATED:20250826T190916Z
LAST-MODIFIED:20250917T134457Z
UID:10003761-1758553200-1758556800@cmsa.fas.harvard.edu
SUMMARY:Non-Supersymmetric Orbifolds\, Quivers and Chen-Ruan Orbifold Cohomology
DESCRIPTION:Quantum Field Theory and Physical Mathematics Seminar \nSpeaker: Max Hübner (Uppsala & CMSA) \nTitle: Non-Supersymmetric Orbifolds\, Quivers and Chen-Ruan Orbifold Cohomology \nAbstract: We consider D3-brane probes of non-supersymmetric orbifolds and IIA on the same class of non-supersymmetric orbifolds. Both setups are characterized\, in part\, by quivers (which in the latter case relate for example to D0-brane probes) from which symmetries constraining the scale-dependence and tachyonic instabilities of the two systems\, respectively\, can be derived. We demonstrate that these considerations can be matched via a geometric analysis of the asymptotic boundary of the relevant orbifolds\, in all cases\, via considerations centered on Chen-Ruan orbifold cohomology.
URL:https://cmsa.fas.harvard.edu/event/qft_92225/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Quantum Field Theory and Physical Mathematics
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-QFT-and-Physical-Mathematics-9.22.25.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250922T163000
DTEND;TZID=America/New_York:20250922T173000
DTSTAMP:20260430T180348Z
CREATED:20250826T191126Z
LAST-MODIFIED:20250914T170550Z
UID:10003732-1758558600-1758562200@cmsa.fas.harvard.edu
SUMMARY:Turbulent Mixing and Antagonistic Microorganisms
DESCRIPTION:Colloquium \nSpeaker: David Nelson\, Harvard \nTitle: Turbulent Mixing and Antagonistic Microorganisms \nAbstract: Unlike coffee and cream that homogenize when stirred\, growing micro-organisms (e.g.\, bacteria and baker’s yeast) can actively kill each other and avoid mixing.  How do such antagonistic interactions impact the growth and survival of competing strains\, while being spatially advected by turbulent flows?  By using analytic arguments and numerical simulations of a continuum model\, we describe the dynamics of two antagonistic strains that are dispersed by both compressible and incompressible turbulent flows in two spatial dimensions.  A key parameter is the ratio of the fluid transport time to that of biological reproduction\, which determines the winning organism that ultimately takes over the whole population from an initial heterogeneous state\, a process known as fixation.  By quantifying the probability and mean time for fixation\, we discuss how turbulence raises the threshold for biological nucleation and antagonism suppresses flow-induced mixing by depleting the population at interfaces. We highlight the unusual biological consequences of the interplay of turbulent fluid flows with antagonistic population dynamics\, with potential implications for marine microbial ecology and origins of biological chirality.
URL:https://cmsa.fas.harvard.edu/event/colloquium_92225/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Colloquium
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-Colloquium-9.22.2025-scaled.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250925T160000
DTEND;TZID=America/New_York:20250925T170000
DTSTAMP:20260430T180348Z
CREATED:20250826T192430Z
LAST-MODIFIED:20250919T142937Z
UID:10003762-1758816000-1758819600@cmsa.fas.harvard.edu
SUMMARY:Degeneration of Calabi-Yau 3-folds and 3-forms
DESCRIPTION:Differential Geometry and Physics Seminar  \nSpeaker: Teng Fei\, Rutgers \nTitle: Degeneration of Calabi-Yau 3-folds and 3-forms \nAbstract: We study the geometries associated to various 3-forms on a symplectic 6-manifold of different orbital types. As an application\, we demonstrate how this can be used to find Lagrangian foliations and other geometric structures of interest arising from certain degeneration of Calabi-Yau 3-folds. \n 
URL:https://cmsa.fas.harvard.edu/event/dgphys_92525/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Differential Geometry and Physics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/DG-Physics-Seminar-9.25.2025.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250926T120000
DTEND;TZID=America/New_York:20250926T130000
DTSTAMP:20260430T180348Z
CREATED:20250826T193028Z
LAST-MODIFIED:20250918T172135Z
UID:10003763-1758888000-1758891600@cmsa.fas.harvard.edu
SUMMARY:Sections of fibrations onto curves in characteristic p>0
DESCRIPTION:Member Seminar \nSpeaker: Iacopo Brivio \nTitle: Sections of fibrations onto curves in characteristic p>0 \nAbstract: This talk is based on joint work in progress with Ben Church. Using symplectic geometry\, Pieloch showed that every smooth fibration $f\colon X\to \mathbb{P}^1$ of complex projective varieties always admits a section. I will explain how this theorem can be recovered using techniques from Hodge theory and the Minimal Model Program. An advantage of this approach is that it allows for a positive characteristic generalization\, by replacing the Hodge theoretic input by a crystalline one. I will also give an example showing that Pieloch’s result can fail in characteristic p>0.
URL:https://cmsa.fas.harvard.edu/event/member-seminar-92625/
LOCATION:Common Room\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Member Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-Member-Seminar-9.26.25-1.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250929T150000
DTEND;TZID=America/New_York:20250929T160000
DTSTAMP:20260430T180348Z
CREATED:20250924T181258Z
LAST-MODIFIED:20250924T183325Z
UID:10003795-1759158000-1759161600@cmsa.fas.harvard.edu
SUMMARY:Graph integrals on Kähler manifolds
DESCRIPTION:Quantum Field Theory and Physical Mathematics Seminar \nSpeaker: Minghao Wang\, Boston University \nTitle: Graph integrals on Kähler manifolds \nAbstract: I will talk about my recent work with Junrong Yan. We proved the convergence of graph integrals on analytic Kähler manifolds in the sense of Cauchy principal values\, which originally arise from holomorphic quantum field theories. In particular\, this allows us to construct geometric invariants of Calabi-Yau metrics. I will also talk about some potential applications of our results. References: arXiv:2507.09170\, arXiv:2401.08113
URL:https://cmsa.fas.harvard.edu/event/qft_92925/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Quantum Field Theory and Physical Mathematics
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-QFT-and-Physical-Mathematics-9.29.25-scaled.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250930T161500
DTEND;TZID=America/New_York:20250930T183000
DTSTAMP:20260430T180348Z
CREATED:20250829T204925Z
LAST-MODIFIED:20250929T175811Z
UID:10003775-1759248900-1759257000@cmsa.fas.harvard.edu
SUMMARY:Geometry and Quantum Theory Seminar
DESCRIPTION:Geometry and Quantum Theory Seminar \nSpeaker 1: Max Hübner\, CMSA \nTitle: On Topological Structures in String Theory \nAbstract: Geometric engineering constructions in string theory often realize QFTs relative to an extra-dimensional geometry. This perspective parallels the symmetry TFT construction where a QFT is presented relative to its extra-dimensional symmetry quiche. Unsurprisingly\, as we will discuss\, these constructions are related. Topological features of the extra-dimensional geometry map onto the symmetry TFT. We discuss examples and generalizations beyond purely geometric constructions in string theory. \nSpeaker 2: Bowen Yang\, CMSA \nTitle: Bounded L-theory \nAbstract: Bounded L-groups arise in the intersection of algebraic L-theory and large-scale geometry\, providing a framework for quadratic forms and automorphisms subject to uniform control conditions. These groups play a role in topology and surgery theory\, especially in contexts where one needs to measure obstructions not just algebraically but also geometrically\, with bounds on propagation or support. In this talk I will give a gentle introduction to the basic definitions\, explain how bounded L-groups differ from classical L-groups\, and outline an application to quantum many-body invariants.
URL:https://cmsa.fas.harvard.edu/event/quantumgeo_93025/
LOCATION:Science Center 507\, 1 Oxford Street\, Cambridge\, 02138
CATEGORIES:Geometry and Quantum Theory Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-Geometry-Quantum-Theory-9.30.25.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251001T140000
DTEND;TZID=America/New_York:20251001T150000
DTSTAMP:20260430T180348Z
CREATED:20250128T214901Z
LAST-MODIFIED:20251002T140605Z
UID:10003710-1759327200-1759330800@cmsa.fas.harvard.edu
SUMMARY:Tropicalized quantum field theory
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: Michael Borinsky\, Perimeter Institute  \nTitle: Tropicalized quantum field theory \nAbstract: Quantum field theory (QFT) is one of the most accurate methods for making phenomenological predictions in physics\, but it has a significant drawback: obtaining concrete predictions from it is computationally very demanding. The standard perturbative approach expands an interacting QFT around a free QFT\, using Feynman diagrams. However\, the number of these diagrams grows superexponentially\, making the approach quickly infeasible. \nI will talk about arXiv:2508.14263\, which introduces an intermediate layer between free and interacting field theories: a tropicalized QFT. Often\, this tropicalized QFT can be solved exactly. The exact solution manifests as a non-linear recursion equation fulfilled by the expansion coefficients of the quantum effective action. Geometrically\, this recursion computes volumes of moduli spaces of metric graphs and is thereby analogous to Mirzakhani’s volume recursions on the moduli space of curves. Building on this exact solution\, an algorithm can be constructed that samples points from the moduli space of graphs approximately proportional to their perturbative contribution. Via a standard Monte Carlo approach we can evaluate the original QFT using this algorithm. Remarkably\, this algorithm requires only polynomial time and memory\, suggesting that perturbative quantum field theory computations actually lie in the polynomial-time complexity class\, while all known algorithms for evaluating individual Feynman integrals are at least exponential in time and memory. The (potential) capabilities of this approach are remarkable: For instance\, we can compute perturbative expansions of massive scalar D=3 phi^3 and D=4 phi^4 quantum field theories up to loop orders between 20 and 50 using a basic proof-of-concept implementation. 
These perturbative orders are completely inaccessible using a naive approach.
URL:https://cmsa.fas.harvard.edu/event/newtech_10125/
LOCATION:Virtual
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-NTM-Seminar-10.1.2025.docx-1-scaled.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251002T160000
DTEND;TZID=America/New_York:20251002T170000
DTSTAMP:20260430T180348Z
CREATED:20250904T162108Z
LAST-MODIFIED:20250926T180606Z
UID:10003778-1759420800-1759424400@cmsa.fas.harvard.edu
SUMMARY:Special Kähler geometry and collapsing
DESCRIPTION:Differential Geometry and Physics Seminar \nSpeaker: Valentino Tosatti\, NYU Courant Institute \nTitle: Special Kähler geometry and collapsing \nAbstract: Special Kähler geometry was first discovered in the context of N=2 supersymmetric 4D gauge theories\, and it also plays a prominent role in mirror symmetry. A key observation of Donagi-Witten and Freed is that the base of every algebraic integrable system admits a special Kähler metric\, while the total space admits a hyperkähler metric. In this talk I will consider compact hyperkähler manifolds with an algebraic integrable system (i.e. a holomorphic Lagrangian torus fibration)\, and consider a family of hyperkähler metrics such that the volume of the torus fibers shrinks to zero. I will explain how the hyperkähler metrics must collapse to a special Kähler metric on the base (away from the discriminant locus)\, and what we can say about the metric completion of the limit.
URL:https://cmsa.fas.harvard.edu/event/dgphys_10225/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Differential Geometry and Physics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/DG-Physics-Seminar-10.2.2025.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251003T120000
DTEND;TZID=America/New_York:20251003T130000
DTSTAMP:20260430T180348
CREATED:20250827T140756Z
LAST-MODIFIED:20250918T171806Z
UID:10003764-1759492800-1759496400@cmsa.fas.harvard.edu
SUMMARY:Local Donaldson-Scaduto conjecture
DESCRIPTION:Member Seminar \nSpeaker: Saman Habibi Esfahani \nTitle: Local Donaldson-Scaduto conjecture \nAbstract: This talk is based on joint works with Gora Bera and Yang Li. Motivated by collapsing Calabi-Yau 3-folds and G2-manifolds with Lefschetz K3 fibrations in the adiabatic setting\, Donaldson and Scaduto conjectured the existence and uniqueness of a special Lagrangian pair-of-pants in the Calabi-Yau 3-fold $ X \times \mathbb{C}$\, where $X$ is either a hyperkähler K3 surface (global version) or an A2-type ALE hyperkähler 4-manifold (local version). After a brief introduction to the subject\, we discuss the significance of this conjecture in the study of Calabi-Yau 3-folds and G2-manifolds\, and then prove the local version of the conjecture. \n 
URL:https://cmsa.fas.harvard.edu/event/member-seminar-10325/
LOCATION:Common Room\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Member Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-Member-Seminar-10.3.25.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251006T090000
DTEND;TZID=America/New_York:20251010T170000
DTSTAMP:20260430T180348
CREATED:20250502T180256Z
LAST-MODIFIED:20260422T160144Z
UID:10003747-1759741200-1760115600@cmsa.fas.harvard.edu
SUMMARY:Mathematical foundations of AI
DESCRIPTION:Mathematical foundations of AI \nDate: October 6–10\, 2025 \nLocation: Harvard CMSA\, Room G10\, 20 Garden Street\, Cambridge MA & via Zoom \nArtificial intelligence (AI) has achieved unprecedented advances\, yet our theoretical understanding lags significantly behind. This gap poses a significant obstacle to improving AI’s safety and reliability. Since the classical tools of learning theory have proven insufficient for understanding AI\, researchers are now drawing insights from a vast array of fields—including functional analysis\, probability theory\, optimal transport\, optimization\, PDEs\, information theory\, geometry\, statistics\, electrical engineering\, and ergodic theory. Those interdisciplinary efforts are gradually shedding light on the underlying principles governing modern AI. This workshop centers around these mathematical and interdisciplinary developments. It will feature a series of talks from people in various subfields. Open problem and small-group sessions will help foster new connections and new research avenues. \n  \n Speakers \n\nJason Altschuler\, University of Pennsylvania\nGuy Bresler\, MIT\nSinho Chewi\, Yale University\nLenaic Chizat\, EPFL\nNabarun Deb\, University of Chicago\nEdgar Dobriban\, University of Pennsylvania\nAhmed El Alaoui\, Cornell University\nZhou Fan\, Yale University\nBoris Hanin\, Princeton University\nJason Klusowski\, Princeton University\nTengyu Ma\, Stanford University\nAlexander Rakhlin\, MIT\nYuting Wei\, University of Pennsylvania\nTijana Zrnic\, Stanford University\n\nOrganizer: Morgane Austern\, Harvard Statistics \n  \nSchedule \nMonday\, Oct. 6\, 2025 \n\n\n\n8:30–9:00 am\nMorning refreshments\n\n\n9:00–10:00 am\nYuting Wei\, U Penn \nTo Intrinsic Dimension and Beyond: Efficient Sampling in Diffusion Models \nThe denoising diffusion probabilistic model (DDPM) has become a cornerstone of generative AI. 
While sharp convergence guarantees have been established for DDPM\, the iteration complexity typically scales with the ambient data dimension of target distributions\, leading to overly conservative theory that fails to explain its practical efficiency. This has sparked recent efforts to understand how DDPM can achieve sampling speed-ups through automatic exploitation of intrinsic low dimensionality of data. This talk explores two key scenarios: (1) For a broad class of data distributions with intrinsic dimension k\, we prove that the iteration complexity of the DDPM scales nearly linearly with k\, which is optimal under the KL divergence metric; (2) For mixtures of Gaussian distributions with k components\, we show that DDPM learns the distribution with iteration complexity that grows only logarithmically in k. These results provide theoretical justification for the practical efficiency of diffusion models.\n\n\n10:00–10:30 am\nBreak\n\n\n10:30–11:30 am\nJason Klusowski\, Princeton \nThe Value of Side Information in Unlabeled Data \nPractitioners often work in settings with limited labeled data and abundant unlabeled data. During training\, they may even have access to extra side information (some labeled\, some not) that won’t be available once the model is deployed. When can this side information actually improve performance? I’ll present a simple framework where a rich-view model that sees the extra features generates pseudo-labels on the large unlabeled data\, and a deployment model that only sees the standard features is trained on both real and pseudo-labels. The two are trained iteratively: each deployment model update calibrates the next round of pseudo-labels\, and those refined pseudo-labels in turn guide the deployment model. Our theory shows that side information helps precisely when the rich-view and deployment models make different kinds of errors. 
We formalize this with a decorrelation score that quantifies how independent those errors are; the more independent\, the greater the performance gains.\n\n\n11:30 am–12:00 pm\nBreak\n\n\n12:00–1:00 pm\nGuy Bresler\, MIT \nGlobal Minimizers of Sigmoid Contrastive Loss \nThe meta-task of obtaining and aligning representations through contrastive pre-training has been steadily gaining importance since its introduction in CLIP and ALIGN. In this paper we theoretically explain the advantages of synchronizing with trainable inverse temperature and bias under the sigmoid loss\, as implemented in the recent SigLIP models of Google DeepMind. Temperature and bias can drive the loss function to zero for a rich class of configurations that we call (m\,b)-Constellations. (m\,b)-Constellations are a novel combinatorial object related to spherical codes and are parametrized by a margin m and relative bias b. We use our characterization of constellations to theoretically justify the success of SigLIP on retrieval\, to explain the modality gap present in SigLIP\, and to identify the necessary dimension for producing high-quality representations. We also propose a reparameterization of the sigmoid loss with explicit relative bias\, which appears to improve training dynamics. Joint work with Kiril Bangachev\, Iliyas Noman\, and Yury Polyanskiy.\n\n\n\n  \nTuesday\, Oct. 7\, 2025 \n\n\n\n8:30–9:00 am\nMorning refreshments\n\n\n9:00–10:00 am\nLénaïc Chizat\, EPFL \nThe Hidden Width of Deep ResNets \nWe present a mathematical framework to analyze the training dynamics of deep ResNets that rigorously captures practical architectures (including Transformers) trained from standard random initializations. Our approach combines stochastic approximation of ODEs with propagation-of-chaos arguments. 
It yields three main insights:\n– Depth begets width: infinite-depth ResNets of any hidden width behave throughout training as if they were infinitely wide;\n– Unified phase diagram: the phase diagram of Transformers mirrors that of two-layer perceptrons\, once the appropriate substitutions are made;\n– Optimal shape scaling: for a given parameter budget P\, a Transformer with optimal shape converges to its limiting dynamics at rate P^{-1/6}.\nThis is based on https://arxiv.org/abs/2509.10167\n\n\n10:00–10:30 am\nBreak \n \n\n\n10:30–11:30 am\nBoris Hanin\, Princeton \nKernel Learning on Manifolds \nThis talk concerns the L_2 risk of minimum norm interpolation with n samples in the RKHS of a kernel K. Unlike most prior work in this space\, our kernels will be defined on any closed d-dimensional Riemannian manifold\, and we require only that the kernels are trace class and elliptic. With these assumptions we get nearly sharp L_2 risk bounds with high probability over the data. Like prior work on round spheres\, our results essentially say that the number of samples n\, the dimension of the manifold\, and some details of the kernel determine a natural spectral cutoff \lambda(n\,d\,K) and that minimal norm interpolation essentially learns exactly the projection of the data generating process onto the eigenfunctions of the Laplacian with frequency at most \lambda(n\,d\,K). Joint work with Mengxuan Yang.\n\n\n11:30–12:00\nBreak\n\n\n12:00–1:00\nZhou Fan\, Yale \nDynamical mean-field analysis of adaptive Langevin diffusions \nIn many applications of statistical estimation via sampling\, one may wish to sample from a high-dimensional target distribution that is adaptively evolving to the samples already seen. We study an example of such dynamics\, given by a Langevin diffusion for posterior sampling in a Bayesian linear regression model with i.i.d. regression design\, whose prior continuously adapts to the Langevin trajectory via a maximum marginal-likelihood scheme. 
Using techniques of dynamical mean-field theory (DMFT)\, we provide a precise characterization of a high-dimensional asymptotic limit for the joint evolution of the prior parameter and law of the Langevin sample. We then carry out an analysis of the equations that describe this DMFT limit\, under conditions of approximate time-translation-invariance which include\, in particular\, settings where the posterior law satisfies a log-Sobolev inequality. In such settings\, we show that this adaptive Langevin trajectory converges on a dimension-independent time horizon to an equilibrium state that is characterized by a system of replica-symmetric fixed-point equations\, and the associated prior parameter converges to a critical point of a replica-symmetric limit for the model free energy. We explore the nature of the free energy landscape and its critical points in a few simple examples\, where such critical points may or may not be unique.\n\n\n\n  \nWednesday\, Oct. 8\, 2025 \n\n\n\n8:30–9:00 am\nMorning refreshments\n\n\n9:00–10:00 am\nJason Altschuler\, U Penn \nNegative Stepsizes Make Gradient-Descent-Ascent Converge \nSolving min-max problems is a central question in optimization\, games\, learning\, and controls. Arguably the most natural algorithm is Gradient-Descent-Ascent (GDA)\, however since the 1970s\, conventional wisdom has argued that it fails to converge even on simple problems. This failure spurred the extensive literature on modifying GDA with extragradients\, optimism\, momentum\, anchoring\, etc. In contrast\, we show that GDA converges in its original form by simply using a judicious choice of stepsizes. The key innovation is the proposal of unconventional stepsize schedules that are time-varying\, asymmetric\, and (most surprisingly) periodically negative. We show that all three properties are necessary for convergence\, and that altogether this enables GDA to converge on the classical counterexamples (e.g.\, unconstrained convex-concave problems). 
The core intuition is that although negative stepsizes make backward progress\, they de-synchronize the min/max variables (overcoming the cycling issue of GDA) and lead to a slingshot phenomenon in which the forward progress in the other iterations is overwhelmingly larger. This results in fast overall convergence. Geometrically\, the slingshot dynamics leverage the non-reversibility of gradient flow: positive/negative steps cancel to first order\, yielding a second-order net movement in a new direction that leads to convergence and is otherwise impossible for GDA to move in. Joint work with Henry Shugart.\n\n\n10:00–10:30 am\nBreak\n\n\n10:30–11:30 am\nNabarun Deb\, U Chicago \nGenerative Modeling via Parabolic Monge-Ampère PDEs \nWe introduce a novel generative modeling framework based on a discretized parabolic Monge-Ampère PDE\, which emerges as a continuous limit of the Sinkhorn algorithm commonly used in optimal transport. Our method performs iterative refinement in the space of Brenier maps using a mirror gradient descent step. We establish theoretical guarantees for generative modeling through the lens of no-regret analysis\, demonstrating that the iterates converge to the optimal Brenier map under a variety of step-size schedules. As a technical contribution\, we derive a new Evolution Variational Inequality tailored to the parabolic Monge-Ampère PDE\, connecting geometry\, transportation cost\, and regret. Our framework accommodates non-log-concave target distributions\, constructs an optimal sampling process via the Brenier map\, and integrates favorable learning techniques from generative adversarial networks and score-based diffusion models.\n\n\n11:30–12:00\nBreak\n\n\n12:00–1:00\nSinho Chewi\, Yale \nDiscretization and distribution learning in diffusion models \nFirst\, I will review some literature on discretization of diffusion models\, focusing on the use of randomized midpoints for deterministic vs. stochastic samplers. 
Then\, I will argue that such sampling guarantees reduce distribution learning\, in the form of learning to generate a sample\, to score matching. To complement this result\, we reduce other forms of distribution learning (parameter estimation and density estimation) to score matching as well. This leads to new consequences for diffusion models\, such as asymptotic efficiency of a DDPM-based parameter estimator and algorithms for Gaussian mixture density estimation\, as well as to a general approach for establishing cryptographic hardness results for score estimation.\n\n\n\n  \nThursday\, Oct. 9\, 2025 \n\n\n\n8:30–9:00 am\nMorning refreshments\n\n\n9:00–10:00 am\nAhmed El Alaoui\, Cornell \nHow abundant are good interpolators? \nWe consider classifying labelled data in the interpolation regime where there exist linear classifiers (with possibly negative margin) correctly classifying all points in the dataset. Under the logistic model with Gaussian features\, we derive the large deviation rate function of the event that an interpolator chosen uniformly at random achieves a given generalization error. This describes the proportion of interpolators having any desired performance. We remark that in a wide regime of parameters\, the vast majority of interpolators have inferior performance to the one found via a simple linear programming procedure\, showing that the latter algorithm produces an atypically good classifier.\nThis is based on joint work with August Chen.\n\n\n10:00–10:30 am\nBreak\n\n\n10:30–11:30 am\nTengyu Ma\, Stanford \nSelf-play LLM Theorem Provers with Iterative Conjecturing and Proving \nI will discuss some works on using RL for theorem proving\, especially in the possible future regime where we run out of high-quality training data. 
To keep improving the models with limited data\, we draw inspiration from mathematicians\, who continuously develop new results\, partly by proposing novel conjectures or exercises (which are often variants of known results) and attempting to solve them. We design the Self-play Theorem Prover (STP) that simultaneously takes on two roles\, conjecturer and prover\, each providing training signals to the other. The model achieves state-of-the-art performance among whole-proof generation methods on miniF2F-test (65.0%\, pass@3200)\, Proofnet-test (23.9%\, pass@3200) and PutnamBench (8/644\, pass@3200). \n \n\n\n11:30–12:00\nbreak\n\n\n12:00–1:00\nEdgar Dobriban\, U Penn \nLeveraging synthetic data in statistical inference \nThe rapid proliferation of high-quality synthetic data — generated by advanced AI models or collected as auxiliary data from related tasks — presents both opportunities and challenges for statistical inference. This paper introduces a GEneral Synthetic-Powered Inference (GESPI) framework that wraps around any statistical inference procedure to safely enhance sample efficiency by combining synthetic and real data. Our framework leverages high-quality synthetic data to boost statistical power\, yet adaptively defaults to the standard inference method using only real data when synthetic data is of low quality. The error of our method remains below a user-specified bound without any distributional assumptions on the synthetic data\, and decreases as the quality of the synthetic data improves. This flexibility enables seamless integration with conformal prediction\, risk control\, hypothesis testing\, and multiple testing procedures\, all without modifying the base inference method. We demonstrate the benefits of our method on challenging tasks with limited labeled data\, including AlphaFold protein structure prediction\, and comparing large reasoning models on complex math problems.\n\n\n\n  \nFriday\, Oct. 
10\, 2025 \n\n\n\n8:30–9:00 am\nMorning refreshments\n\n\n9:00–10:00 am\nTijana Zrnic\, Stanford \nProbably Approximately Correct Labels \nObtaining high-quality labeled datasets is often costly\, requiring either extensive human annotation or expensive experiments. We propose a method that supplements such “expert” labels with AI predictions from pre-trained models to construct labeled datasets more cost-effectively. Our approach results in probably approximately correct labels: with high probability\, the overall labeling error is small. This solution enables rigorous yet efficient dataset curation using modern AI models. We demonstrate the benefits of the methodology through text annotation with large language models\, image labeling with pre-trained vision models\, and protein folding analysis with AlphaFold. This is joint work with Emmanuel Candes and Andrew Ilyas.\n\n\n10:00–10:30 am\nBreak\n\n\n10:30–11:30 am\nAlexander Rakhlin\, MIT \nElements of Interactive Decision Making \nMachine learning methods are increasingly deployed in interactive environments\, ranging from dynamic treatment strategies in medicine to fine-tuning of LLMs using reinforcement learning. In these settings\, the learning agent interacts with the environment to collect data and necessarily faces an exploration-exploitation dilemma. We present a general framework for interactive decision making that subsumes multi-armed bandits\, contextual bandits\, structured bandits\, and reinforcement learning. We focus on both the statistical aspect of learning—aiming to develop a tight characterization of sample complexity in terms of properties of the class of models—and on the basic algorithmic primitives.\n\n\n\n  \n  \n\n  \n 
URL:https://cmsa.fas.harvard.edu/event/mathai/
LOCATION:CMSA 20 Garden Street Cambridge\, Massachusetts 02138 United States
CATEGORIES:Workshop
ATTACH;FMTTYPE=image/jpeg:https://cmsa.fas.harvard.edu/media/MathAI.5.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251006T150000
DTEND;TZID=America/New_York:20251006T160000
DTSTAMP:20260430T180348
CREATED:20250924T182709Z
LAST-MODIFIED:20251006T144221Z
UID:10003796-1759762800-1759766400@cmsa.fas.harvard.edu
SUMMARY:Non-perturbative aspects of self-dual gauge theory
DESCRIPTION:Quantum Field Theory and Physical Mathematics Seminar \nSpeaker: Kevin Costello (Perimeter Institute)\n\nTitle: Non-perturbative aspects of self-dual gauge theory\n\nAbstract: Self-dual gauge theory is conformal in perturbation theory\, but has a non-trivial beta-function when instanton effects are included. I will give two computations of this beta-function\, one based on the Grothendieck-Riemann-Roch formula and one using holography in the topological string. This leads to two new ways to compute the standard QCD beta-function at one loop\, without using Feynman diagrams. If time permits\, I will also discuss how instantons affect scattering amplitudes.\n\n 
URL:https://cmsa.fas.harvard.edu/event/qft_100625/
LOCATION:Virtual
CATEGORIES:Quantum Field Theory and Physical Mathematics
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-QFT-and-Physical-Mathematics-10.6.25-scaled.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251006T163000
DTEND;TZID=America/New_York:20251006T173000
DTSTAMP:20260430T180348
CREATED:20250914T165359Z
LAST-MODIFIED:20250914T165941Z
UID:10003794-1759768200-1759771800@cmsa.fas.harvard.edu
SUMMARY:Geometry of dimer models
DESCRIPTION:Colloquium \nSpeaker: Alexei Borodin\, MIT \nTitle: Geometry of dimer models \nAbstract: Random dimer coverings of large planar graphs are known to exhibit unusual and visually apparent asymptotic phenomena that include formation of frozen regions and various phases in the unfrozen ones. For a specific family of subgraphs of the (periodically weighted) square lattice known as the Aztec diamonds\, the asymptotic behavior of dimers admits a precise description in terms of geometry of underlying Riemann surfaces. The goal of the talk is to explain how the surface structure manifests itself through the statistics of dimers. Based on joint works with T. Berggren and M. Duits. \n 
URL:https://cmsa.fas.harvard.edu/event/colloquium_10625/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Colloquium
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-Colloquium-10.6.2025.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251007T161500
DTEND;TZID=America/New_York:20251007T183000
DTSTAMP:20260430T180348
CREATED:20251001T183038Z
LAST-MODIFIED:20251007T132737Z
UID:10003802-1759853700-1759861800@cmsa.fas.harvard.edu
SUMMARY:A Classifying Space for Phases of Matrix Product States
DESCRIPTION:Geometry and Quantum Theory Seminar \nSpeaker: Daniel Spiegel\, Harvard Math \nTitle: A Classifying Space for Phases of Matrix Product States \nAbstract: Alexei Kitaev has conjectured that there should be a loop spectrum consisting of spaces of gapped invertible quantum spin systems\, indexed by the spatial dimension d of the lattice. Motivated by Kitaev’s conjecture\, I will detail a concrete construction of a topological space B consisting of translation invariant injective matrix product states (MPS) of all physical and bond dimensions\, which plays the role of Kitaev’s space in dimension d = 1. Having such a space is a useful tool in the discussion of parametrized phases of MPS; in fact it allows us to define a parametrized phase as a homotopy class of maps into B. The space B is constructed as the quotient of a contractible space E of MPS tensors modulo gauge transformations. The projection map from E to B is a quasifibration\, from which we can compute the homotopy groups of the classifying space B by a long exact sequence. In particular\, B has the weak homotopy type K(Z\, 2) x K(Z\, 3)\, shedding light on Kitaev’s conjecture in the context of MPS. \nDaniel Spiegel will speak for 60 minutes. \nSunghyuk Park (CMSA) will also speak for 15 minutes.
URL:https://cmsa.fas.harvard.edu/event/quantumgeo_10725/
LOCATION:Science Center 507\, 1 Oxford Street\, Cambridge\, 02138
CATEGORIES:Geometry and Quantum Theory Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-Geometry-Quantum-Theory-10.7.25-scaled.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251008T140000
DTEND;TZID=America/New_York:20251008T150000
DTSTAMP:20260430T180348
CREATED:20250930T181425Z
LAST-MODIFIED:20251009T195959Z
UID:10003801-1759932000-1759935600@cmsa.fas.harvard.edu
SUMMARY:Understanding Optimization in Deep Learning with Central Flows
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: Alex Damian\, Harvard \nTitle: Understanding Optimization in Deep Learning with Central Flows \nAbstract: Traditional theories of optimization cannot describe the dynamics of optimization in deep learning\, even in the simple setting of deterministic training. The challenge is that optimizers typically operate in a complex\, oscillatory regime called the “edge of stability.” In this paper\, we develop theory that can describe the dynamics of optimization in this regime. Our key insight is that while the *exact* trajectory of an oscillatory optimizer may be challenging to analyze\, the *time-averaged* (i.e. smoothed) trajectory is often much more tractable. To analyze an optimizer\, we derive a differential equation called a “central flow” that characterizes this time-averaged trajectory. We empirically show that these central flows can predict long-term optimization trajectories for generic neural networks with a high degree of numerical accuracy. By interpreting these central flows\, we are able to understand how gradient descent makes progress even as the loss sometimes goes up; how adaptive optimizers “adapt” to the local loss landscape; and how adaptive optimizers implicitly navigate towards regions where they can take larger steps. Our results suggest that central flows can be a valuable theoretical tool for reasoning about optimization in deep learning. \n 
URL:https://cmsa.fas.harvard.edu/event/newtech_10825/
LOCATION:Hybrid – G10
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-NTM-Seminar-10.8.2025-scaled.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251009T140000
DTEND;TZID=America/New_York:20251009T150000
DTSTAMP:20260430T180348
CREATED:20250911T184457Z
LAST-MODIFIED:20251002T182058Z
UID:10003789-1760018400-1760022000@cmsa.fas.harvard.edu
SUMMARY:Profinite tensor powers
DESCRIPTION:Algebra Seminar \nSpeaker: David Treumann (Boston College) \nTitle: Profinite tensor powers \nAbstract: I’ll discuss the problem of defining a tensor product of profinitely many copies of a vector space V\, and propose a definition $\bigotimes_X^{mcc} V$ in the special situation that (1) V is finite-dimensional over F_2\, and (2) the profinite X indexing the tensor factors is acted on with finitely many orbits by a pro-2-group. The “mcc” on the tensor sign stands for “magnetized and conditionally convergent.” A variant construction makes sense when V is a bimodule over a semisimple F_2-algebra\, and the index set X has the profinite version of a cyclic order. The definition organizes some computations in Heegaard Floer homology: it can be pitched as a computation of the HF of some pro-3-manifolds\, though we do not know how to define such a thing. This is joint work with CM Michael Wong. \n 
URL:https://cmsa.fas.harvard.edu/event/algebra-seminar_10925/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Algebra Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-Algebra-Seminar-10.9.25-scaled.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251009T160000
DTEND;TZID=America/New_York:20251009T170000
DTSTAMP:20260430T180348
CREATED:20250904T162516Z
LAST-MODIFIED:20251010T130239Z
UID:10003779-1760025600-1760029200@cmsa.fas.harvard.edu
SUMMARY:Symmetries and Moduli Spaces: Baby Steps beyond Calabi-Yau
DESCRIPTION:Differential Geometry and Physics Seminar  \nSpeaker: Xingyang Yu\, Virginia Tech \nTitle: Symmetries and Moduli Spaces: Baby Steps beyond Calabi-Yau \nAbstract: I will explore the interplay between symmetries and moduli spaces in string compactifications\, starting from the familiar Calabi–Yau case and then taking some baby steps toward more general settings. A classical benchmark is the line bundle over Calabi–Yau complex structure moduli space\, whose physical counterpart corresponds to the Berry phase of the spectral flow operator in worldsheet SCFT. I will review this story and then discuss how it begins to change in c=1 theories with worldsheet anomalies\, and in G_2 and Spin(7) compactifications where U(1)_R symmetry is absent. The goal is not a finished framework\, but to highlight how anomalies and non-invertible symmetries may enter the picture and to raise open questions about what kinds of structures might live over moduli spaces beyond Calabi–Yau.
URL:https://cmsa.fas.harvard.edu/event/dgphys_10925/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Differential Geometry and Physics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/DG-Physics-Seminar-9.9.2025-scaled.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251010T120000
DTEND;TZID=America/New_York:20251010T130000
DTSTAMP:20260430T180348
CREATED:20250827T140826Z
LAST-MODIFIED:20251006T190823Z
UID:10003765-1760097600-1760101200@cmsa.fas.harvard.edu
SUMMARY:The Rozansky-Witten field theory in the functorial TQFT formalism
DESCRIPTION:Member Seminar \nSpeaker: Lorenzo Riva \nTitle: The Rozansky-Witten field theory in the functorial TQFT formalism \nAbstract: This will be a broad talk about the topic of my PhD thesis. We will discuss a particular example of a 3D field theory from physics called Rozansky-Witten\, which is interesting from both a physical and a mathematical point of view: it is connected with mirror symmetry\, the A- and B-models\, Calabi-Yau geometry\, and the partition functions give finite-type invariants of 3-manifolds. In the rest of the talk we will try to formalize this field theory as a functor out of a certain cobordism 3-category (emphasis on “try”).
URL:https://cmsa.fas.harvard.edu/event/member-seminar-101025/
LOCATION:Common Room\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Member Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-Member-Seminar-10.10.25-scaled.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251014T161500
DTEND;TZID=America/New_York:20251014T183000
DTSTAMP:20260430T180348
CREATED:20251001T183159Z
LAST-MODIFIED:20251014T154228Z
UID:10003803-1760458500-1760466600@cmsa.fas.harvard.edu
SUMMARY:Geometry and Quantum Theory Seminar
DESCRIPTION:Geometry and Quantum Theory Seminar \nSpeaker: Dylan Galt\, Harvard \n(60 minute talk) \nTitle: What is a “nonlinear” near-symplectic form? \nAbstract: In this talk\, I will explain how one might understand this question and why a possible answer can be found in the geometry of coassociative 4-folds\, a special class of minimal submanifolds discovered by Harvey and Lawson. \n  \nSpeaker: Keyou Zeng\, CMSA \n(30 minute talk) \nTitle: Cohomology of configuration space of points \nAbstract: Configuration space of points is an interesting and important subject in mathematics and physics. I’ll review some classical results computing cohomology of configuration space of points. I’ll also introduce some recent progress in computing sheaf cohomology of configuration space of affine space. \n  \n 
URL:https://cmsa.fas.harvard.edu/event/quantumgeo_101425/
LOCATION:Science Center 507\, 1 Oxford Street\, Cambridge\, 02138
CATEGORIES:Geometry and Quantum Theory Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-Geometry-Quantum-Theory-10.14.25-scaled.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251015T170000
DTEND;TZID=America/New_York:20251015T180000
DTSTAMP:20260430T180348
CREATED:20250311T134919Z
LAST-MODIFIED:20251021T134849Z
UID:10003657-1760547600-1760551200@cmsa.fas.harvard.edu
SUMMARY:Millennium Prize Problems Lecture - Sourav Chatterjee: Yang-Mills and the foundations of quantum field theory
DESCRIPTION:Millennium Prize Problems Lecture  \nDate: October 15\, 2025 \nTime: 5:00–6:00 pm \nLocation: Harvard Science Center Hall D\, 1 Oxford St.\, Cambridge MA \nSpeaker: Sourav Chatterjee\, Stanford University \nTitle: Yang-Mills and the foundations of quantum field theory \nAbstract: Yang-Mills theories are the building blocks of the Standard Model of particle physics\, which is the best available model for our universe at the quantum scale. Yet\, these theories do not have a rigorous mathematical foundation. Physical calculations are based on perturbation theory\, but there are various phenomena that are believed to be out of the reach of perturbative arguments. Building a mathematical foundation is\, therefore\, important even from the physics point of view. A program with this objective\, known as “constructive field theory”\, was initiated in the 1960s. In spite of many successes\, the program has not reached its original goal. Completing this program is the Clay Millennium Prize problem of Yang-Mills existence and mass gap. I will give a general introduction to the main questions\, and an overview of exciting recent progress that has rejuvenated the quest for a solution in the last ten years. \nRead more about the Yang-Mills Existence and Mass Gap at the Clay Math website. \nOrganizers: Martin Bridson\, Clay Mathematics Institute | Dan Freed\, Harvard University and CMSA | Mike Hopkins\, Harvard University \n\nMillennium Prize Problems Lecture Series
URL:https://cmsa.fas.harvard.edu/event/clay_101425/
LOCATION:Harvard Science Center Hall D\, 1 Oxford Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Millennium Prize Problems Lecture,Special Lectures
ATTACH;FMTTYPE=image/jpeg:https://cmsa.fas.harvard.edu/media/Chatterjee_web_ad.2-scaled.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251016T140000
DTEND;TZID=America/New_York:20251016T150000
DTSTAMP:20260430T180348
CREATED:20250911T184527Z
LAST-MODIFIED:20251006T162709Z
UID:10003790-1760623200-1760626800@cmsa.fas.harvard.edu
SUMMARY:Lech's inequality and stability of local rings
DESCRIPTION:Algebra Seminar \nSpeaker: Linquan Ma (Purdue University) \nTitle: Lech’s inequality and stability of local rings \nAbstract: We explore Lech’s inequality relating the colength and multiplicity of m-primary ideals in a Noetherian local ring (R\,m). We introduce a natural invariant that measures the sharpness of Lech’s inequality and show its connections with singularities of asymptotically semistable varieties and singularities arising from the MMP. We compute this invariant in various examples. This is joint work with Ilya Smirnov. \n 
URL:https://cmsa.fas.harvard.edu/event/algebra-seminar_101625/
LOCATION:Virtual
CATEGORIES:Algebra Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-Algebra-Seminar-10.16.25.docx-1-1-scaled.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251016T160000
DTEND;TZID=America/New_York:20251016T170000
DTSTAMP:20260430T180348
CREATED:20250904T162550Z
LAST-MODIFIED:20251014T150012Z
UID:10003780-1760630400-1760634000@cmsa.fas.harvard.edu
SUMMARY:Differential Geometry and Physics Seminar
DESCRIPTION:Differential Geometry and Physics Seminar  \nSpeaker: Andy Neitzke\, Yale \nTitle: Abelianization of tau functions \nAbstract: The symplectic and hyperkähler geometry of moduli spaces of flat connections over Riemann surfaces is in a sense quantized by the theory of isomonodromic tau functions. These functions in turn arise as partition functions in the conformal field theory of twisted free fermions. I will describe a new scheme for computing these tau functions via abelianization\, and what it produces in one simple example\, related to the Painlevé I equation. This scheme is joint work with Qianyu Hao. The talk is intended to be self-contained (you don’t have to know in advance what a tau function or a conformal field theory is). \n  \n 
URL:https://cmsa.fas.harvard.edu/event/dgphys_101625/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Differential Geometry and Physics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/DG-Physics-Seminar-10.16.2025-scaled.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251017T120000
DTEND;TZID=America/New_York:20251017T130000
DTSTAMP:20260430T180348
CREATED:20250827T141359Z
LAST-MODIFIED:20251010T180544Z
UID:10003766-1760702400-1760706000@cmsa.fas.harvard.edu
SUMMARY:DMFT\, Two Point Correlations of Resolvents\, and Applications to Machine Learning Theory
DESCRIPTION:Member Seminar \nSpeaker: Blake Bordelon \nTitle: DMFT\, Two Point Correlations of Resolvents\, and Applications to Machine Learning Theory \nAbstract: Machine learning algorithms evolve the parameters of a model in a high dimensional and disordered loss landscape. To characterize the effects of random initialization of model parameters\, randomly sampled training data\, and SGD noise\, it is often useful to invoke ideas from random matrix theory and the physics of disordered systems. In this seminar\, I describe a general idea\, known as dynamical mean field theory (DMFT)\, which describes the evolution of a disordered dynamical system in infinite dimensions. I will briefly describe simple examples of interest to theoretical neuroscientists and machine learning theorists. For linear dynamical systems\, I will show that this method characterizes the typical-case trajectory in terms of two point correlations of resolvent matrices evaluated at different frequencies. This bispectral object can account for puzzling effects such as late-time divergence of gradient descent at the interpolation threshold (when parameters = dataset size) despite the Jacobian of the dynamics having real and non-positive eigenvalues. I will then describe a novel two point correlation result for general free products of the form M = O B O^T A for O sampled from the Haar measure. I will use this result to characterize the exact asymptotics of the performance of a linear transformer trained to perform in-context linear regression on “generic” (randomly rotated) covariance matrices.
URL:https://cmsa.fas.harvard.edu/event/member-seminar-101725/
LOCATION:Common Room\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Member Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-Member-Seminar-10.17.25-scaled.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251017T130000
DTEND;TZID=America/New_York:20251017T160000
DTSTAMP:20260430T180348
CREATED:20250930T134721Z
LAST-MODIFIED:20251014T133421Z
UID:10003800-1760706000-1760716800@cmsa.fas.harvard.edu
SUMMARY:Freedman Seminar: Michael Freedman\, CMSA & Bowen Yang\, CMSA
DESCRIPTION:Freedman Seminar \nSpeaker: Michael Freedman\, Harvard CMSA \nTitle: Sullivan’s work on Lipschitz structures \nAbstract: I’ll begin with an elementary\, but now little known\, piece of PL topology: engulfing. John Stallings used it to give an alternative proof of the high-dimensional Poincaré conjecture. Then I’ll explain Dennis Sullivan’s enhancement of Kirby’s torus trick (which relies on engulfing). I’ll note an open question regarding Lipschitz structures on 4-manifolds. \n  \nSpeaker: Bowen Yang\, CMSA \nTitle: Quantum Cellular Automata and Algebraic L-Theory \nAbstract: Quantum cellular automata (QCAs) are models of reversible quantum dynamics that preserve locality; they can be thought of as quantum analogues of classical cellular automata\, but with much richer structure. I will describe a classification of the Clifford subclass of QCAs using methods from algebraic L-theory. The main result identifies the group of Clifford QCAs\, up to natural equivalences\, with L-theory homology of the underlying space. This gives a conceptual explanation of previously observed periodic patterns in lattice models and extends the picture to more general spaces. I will outline the ideas behind the construction and indicate how the framework connects topology\, operator algebras\, and quantum information. If time permits\, I will also comment on what is known — and unknown — about the general (non-Clifford) case.
URL:https://cmsa.fas.harvard.edu/event/freedman_101725/
LOCATION:Virtual
CATEGORIES:Freedman Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-Freedman-Seminar-10.17.25.docx-1-scaled.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251020T150000
DTEND;TZID=America/New_York:20251020T160000
DTSTAMP:20260430T180348
CREATED:20250924T183004Z
LAST-MODIFIED:20251016T160042Z
UID:10003797-1760972400-1760976000@cmsa.fas.harvard.edu
SUMMARY:Categorical 't Hooft expansion and Chiral Algebras
DESCRIPTION:Quantum Field Theory and Physical Mathematics Seminar \nSpeaker: Adrian López-Raven\, Perimeter \nTitle: Categorical ‘t Hooft expansion and Chiral Algebras \nAbstract: In https://arxiv.org/abs/2411.00760\, we show how holographic dual B-model backgrounds can be systematically derived from the ‘t Hooft expansion of specific families of chiral algebras. The resulting holographic dual backgrounds are typically non-commutative and appear to be novel. In this talk I’ll review certain aspects of our construction. In particular\, we’ll review how to build a category of D-branes for the String Theory dual\, starting from the planar limit of the chiral algebra. Given its generality\, I’ll emphasize the potential utility of the construction in the study of weak coupling holography for general theories with a large N limit. \n  \n 
URL:https://cmsa.fas.harvard.edu/event/qft_102025/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Quantum Field Theory and Physical Mathematics
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-QFT-and-Physical-Mathematics-10.20.25-scaled.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251020T163000
DTEND;TZID=America/New_York:20251020T173000
DTSTAMP:20260430T180348
CREATED:20250912T180641Z
LAST-MODIFIED:20251030T151928Z
UID:10003752-1760977800-1760981400@cmsa.fas.harvard.edu
SUMMARY:Math Science Lectures in Honor of Raoul Bott | Dennis Gaitsgory\, MPIM | Function-theoretic implications of geometric Langlands
DESCRIPTION:Two talks on Function-theoretic implications of geometric Langlands\nDates: October 20 & 21\, 2025 \nTime: 4:30–5:30 pm \nLocation: Science Center Lecture Hall A and via Webinar \n  \nSpeaker: Dennis Gaitsgory\, Max Planck Institute for Mathematics \nAbstract: The recently established geometric Langlands equivalence implies an explicit description of the space of (unramified) automorphic functions in terms of Langlands parameters. In these lectures\, we will derive this description and explain how far we can go with it in order to deduce some expected properties of automorphic functions\, e.g.\, the Ramanujan and Arthur multiplicity conjectures. This is joint work with Vincent Lafforgue and Sam Raskin. \n  \nLecture 1: Monday\, October 20\, 2025\nFrom geometric to classical Langlands \n \n  \nLecture 2: Tuesday\, October 21\, 2025\nAnalytic properties of automorphic functions as seen from algebraic geometry \n \n  \n\nHarvard Mathematics Professor Raoul Bott (1923–2005) was a Hungarian-American mathematician known for numerous foundational contributions to geometry in its broad sense. He is best known for his Bott periodicity theorem\, the Morse–Bott functions which he used in this context\, and the Borel–Bott–Weil theorem.
URL:https://cmsa.fas.harvard.edu/event/mathscibott_2025/
LOCATION:Harvard Science Center\, 1 Oxford Street\, Cambridge\, MA\, 02138
CATEGORIES:Event,Math Science Lectures in Honor of Raoul Bott,Special Lectures
ATTACH;FMTTYPE=image/jpeg:https://cmsa.fas.harvard.edu/media/Bott-Lecture_2025.v2-scaled.jpg
END:VEVENT
END:VCALENDAR