BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//CMSA - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:CMSA
X-ORIGINAL-URL:https://cmsa.fas.harvard.edu
X-WR-CALDESC:Events for CMSA
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20240310T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20241103T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20250309T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20251102T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20260308T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20261101T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20270314T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20271107T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250908T090000
DTEND;TZID=America/New_York:20250910T170000
DTSTAMP:20260507T001715Z
CREATED:20250502T174228Z
LAST-MODIFIED:20260422T141418Z
UID:10003660-1757322000-1757523600@cmsa.fas.harvard.edu
SUMMARY:Math and Machine Learning Reunion Workshop
DESCRIPTION:Math and Machine Learning Reunion Workshop \nDates: September 8–10\, 2025 \nLocation: Harvard CMSA\, Room G10\, 20 Garden Street\, Cambridge MA \nMachine learning and AI are increasingly important tools in all fields of research. In the fall of 2024\, the CMSA Mathematics and Machine Learning Program hosted 70 mathematicians and machine learning experts\, ranging from beginners to established leaders in their field\, to explore ML as a research tool for mathematicians\, and mathematical approaches to understanding ML. More than 20 papers came out of projects started and developed during the program. The MML Reunion workshop will be an opportunity for the participants to share their results\, review subsequent developments\, and develop directions for future research. \nInvited Speakers \n\nAngelica Babei\, Howard University\nGergely Bérczi\, Aarhus University\nJoanna Bieri\, University of Redlands\nGiorgi Butbaia\, University of New Hampshire\nRandy Davila\, RelationalAI\, Rice University\nAlyson Deines\, IDA-CCR La Jolla\nSergei Gukov\, Caltech\nYang-Hui He\, University of Oxford\nMark Hughes\, Brigham Young University\nKyu-Hwan Lee\, University of Connecticut\nEric Mjolsness\, UC Irvine\nMaria Prat Colomer\, Brown University\nSébastien Racanière\, Google DeepMind\nEric Ramos\, Stevens Institute of Technology\nTamara Veenstra\, IDA-CCR La Jolla\n\nOrganizer: Michael Douglas\, CMSA \n\nSchedule \nMonday\, Sep. 8\, 2025 \n\n\n\n9:00–9:30 am\nMorning refreshments\n\n\n9:30–9:45 am\nIntroductions\n\n\n9:45–10:45 am\nAngelica Babei\, Howard University\nTitle: Predicting Euler factors of elliptic curves\nAbstract: Two non-isogenous elliptic curves will have distinct traces of Frobenius at a large enough prime\, and a finite set of $a_p(E)$ values determines all others. 
However\, even when enough $a_p(E)$ values are provided to uniquely identify the isogeny class\, no efficient algorithm is known for determining the remaining $a_p(E)$ values from this finite set. Preliminary results show that ML models can learn to predict the next trace of Frobenius with a surprising degree of accuracy from relatively few nearby entries. We investigate some possible reasons for this performance. Based on joint work with François Charton\, Edgar Costa\, Xiaoyu Huang\, Kyu-Hwan Lee\, David Lowry-Duda\, Ashvni Narayanan\, and Alexey Pozdnyakov.\n\n\n10:45–11:00 am\nBreak\n\n\n11:00 am–12:00 pm\nKyu-Hwan Lee\, University of Connecticut\nTitle: Machine learning mutation-acyclicity of quivers\n\n\n12:00–1:30 pm\nLunch\n\n\n1:30–2:30 pm\nGergely Bérczi\, Aarhus University\nTitle: Diffusion Models for Sphere Packings\n\n\n2:30–2:45 pm\nBreak\n\n\n2:45–3:45 pm\nRandy Davila\, RelationalAI\, Rice University\nTitle: Recent Developments in Automated Conjecturing\nAbstract: The dream of a machine capable of generating deep mathematical insight has inspired decades of research—from Fajtlowicz’s Graffiti program in graph theory and chemistry to DeepMind’s neural breakthroughs in knot theory. In this talk\, we briefly trace the evolution of automated conjecturing systems and present recent advances that deepen our understanding of what it means for machines to conjecture—a pursuit long embodied by our system\, TxGraffiti. Building on this legacy\, we introduce a new framework that integrates optimization\, enumeration\, and convex geometric methods with creative heuristics and symbolic translation. This extended system produces not only conjectured inequalities\, but also necessary and sufficient condition statements\, which can then be automatically ranked by the IRIS (Inequality Ranking and Inference System) model and translated into Lean 4 for formal verification. 
The result is a flexible architecture capable of generating precise\, human-readable\, and logically rigorous conjectures with minimal manual intervention.\nWe showcase results across a range of mathematical areas\, including graph theory\, polyhedral theory\, number theory\, and—for the first time—conjectures in string theory\, derived from the dataset of complete intersection Calabi–Yau (CICY) threefolds. Together\, these developments suggest that with the right blend of structure\, strategy\, and aesthetic\, machines can generate conjectures that not only withstand scrutiny but invite it—offering a glimpse into a future where AI contributes meaningfully to the creative process of mathematics.\n\n\n3:45–4:00 pm\nBreak\n\n\n4:00–5:00 pm\nEric Ramos\, Stevens Institute of Technology\nTitle: An AI approach to a conjecture of Erdos\nAbstract: Given a graph G\, its independence sequence is the integral sequence a_1\,a_2\,…\,a_n\, where a_i is the number of independent sets of vertices of size i. In the 90’s Erdos and coauthors showed that this sequence need not be unimodal for general graphs\, but conjectured that it is always unimodal whenever G is a tree. This conjecture was then naturally generalized to claim that the independence sequence of trees should be log concave\, in the sense that a_i^2 is always at least a_{i-1}a_{i+1}. This stronger version of the conjecture was shown to hold for all trees of at most 25 vertices. In 2023\, however\, using improved computational power and a considerably more efficient algorithm\, Kadrawi\, Levit\, Yosef\, and Mizrachi proved that there were exactly two trees on 26 vertices whose independence sequence was not log concave. They also showed how these two examples could be generalized to create two families of trees whose members are all not log concave. 
Finally\, in early 2025\, Galvin provided a family of trees with the property that for any chosen positive integer k\, there is a member T of the family where log concavity breaks at index alpha(T) – k\, where alpha(T) is the independence number of T. Outside of these three families\, not much else was known about what causes log concavity to break. In this presentation\, I will discuss joint work of myself and Shiqi Sun\, where we used the PatternBoost architecture to train a machine to find counter-examples to the log concavity conjecture. We will discuss the successes of this approach – finding tens of thousands of new counter-examples with vertex set sizes varying from 27 to 101 – and some of its fascinating failures.\n\n\n\n  \nTuesday\, Sep. 9\, 2025 \n\n\n\n9:00–9:30 am\nMorning refreshments\n\n\n9:30–10:30 am\nMaria Prat Colomer\, Brown University\nTitle: From PINNs to Computer-Assisted Proofs for Fluid Dynamics\nAbstract: Physics-Informed Neural Networks (PINNs) have emerged as an alternative to traditional numerical methods for solving partial differential equations (PDEs). We apply PINNs to the study of low regularity problems in fluid dynamics\, focusing on the incompressible 2D Euler equations. In particular\, we study V-states\, which are a class of weak\, non-smooth solutions for which the vorticity is the characteristic function of a domain that rotates with constant angular velocity. We have obtained an approximate solution of a limiting V-state using a PINN and we are currently working on a rigorous proof of the existence of a nearby solution through a computer-assisted proof. 
Our PINN-based numerical approximation significantly improves on traditional methods\, a key factor being the integration of prior mathematical knowledge of the problem to effectively explore the solution space.\n\n\n10:30–11:00 am\nBreak\n\n\n11:00 am–12:00 pm\nSébastien Racanière\, Google DeepMind\nTitle: Generative models and high dimensional symmetries: the case of Lattice QCD\nAbstract: Applying normalizing flows\, a machine learning technique for mapping distributions\, to Lattice QCD offers a promising route to enhance simulations and overcome limitations of traditional methods like Hybrid Monte Carlo. LQCD aims to compute expectation values of observables from an intractable distribution defined over a lattice of fields. Normalizing flows can learn this complex distribution and generate new configurations\, improving efficiency and addressing challenges such as critical slowing down and topological freezing. Topological freezing\, in particular\, traps simulations in local minima and prevents exploration of the full configuration space\, affecting accuracy. This approach incorporates the symmetries of LQCD through gauge equivariant flows\, leading to successful definitions and good effective sample sizes on smaller lattices. Beyond accelerating configuration generation\, normalizing flows also find application in variance reduction for observable calculation and exploring phenomena at different scales within LQCD. 
While further research is needed to apply these methods at the scale of state-of-the-art LQCD calculations\, these advancements hold significant potential to improve the accuracy\, efficiency\, and reach of future simulations.\n\n\n12:00–1:30 pm\nLunch break\n\n\n1:30–2:30 pm\nSergei Gukov\, Caltech\nTitle: On sparse reward problems in mathematics\nAbstract: An alternative title for this talk could be “Learning Hardness.” To see why\, we will explore some long-standing open problems in mathematics and examine what makes them hard from a computational perspective. We will argue that\, in many cases\, the difficulty arises from a highly uneven distribution of hardness within families of related problems\, where the truly hard cases lie far out in the tail. We will then discuss how recent advances in AI may provide new tools to tackle these challenges. Based in part on recent work with A. Shehper\, A. Medina-Mardones\, L. Fagan\, B. Lewandowski\, A. Gruen\, Y. Qiu\, P. Kucharski\, and Z. Wang.\n\n\n2:30–2:45 pm\nBreak\n\n\n2:45–3:45 pm\nAlyson Deines\, IDA-CCR La Jolla; Tamara Veenstra\, IDA-CCR La Jolla; Joanna Bieri\, University of Redlands\nTitle: Machine learning $L$-functions\nAbstract: We study the vanishing order of rational $L$-functions and Maass form $L$-functions from a data scientific perspective. Each $L$-function is represented by finitely many Dirichlet coefficients\, the normalization of which depends on the context. We observe murmurations by averaging over these datasets. For rational $L$-functions\, we find that PCA clusters rational $L$-functions by their vanishing order and record that LDA and neural networks may accurately predict this quantity. 
For Maass form $L$-functions\, while PCA does not cluster these $L$-functions\, we still find that LDA and neural networks may accurately predict this quantity.\n\n\n3:45–4:00 pm\nBreak\n\n\n4:00–5:00 pm\nMark Hughes\, Brigham Young University\nTitle: Modelling the concordance group via contrastive learning\nAbstract: The concordance group of knots in 3-space is an abelian group formed by the equivalence classes of knots under the relation of concordance\, where two knots are concordant if they are the boundary of a smooth annulus properly embedded in the 4-dimensional product space S^3 x I. Though studied since 1966\, properties of the concordance group (and even the recognition problem of deciding when a knot is null-concordant\, or slice) are difficult to study. In this talk I will outline ongoing attempts to model the concordance group using contrastive learning. This is joint work with Onkar Singh Gujral.\n\n\n\n  \n  \nWednesday\, Sep. 10\, 2025 \n\n\n\n9:00–9:30 am\nMorning refreshments\n\n\n9:30–10:30 am\nYang-Hui He\, University of Oxford (Via Zoom)\nTitle: AI for Mathematics: Bottom-up\, Top-Down\, Meta-\nAbstract: We argue how AI can assist mathematics in three ways: theorem-proving\, conjecture formulation\, and language processing. Inspired by initial experiments in geometry and string theory in 2017\, we summarize how this emerging field has grown over the past years\, and show how various machine-learning algorithms can help with pattern detection across disciplines ranging from algebraic geometry to representation theory\, to combinatorics\, and to number theory. 
At the heart of the programme is the question of how AI helps with theoretical discovery\, and the implications for the future of mathematics.\n\n\n10:30–11:00 am\nBreak\n\n\n11:00 am–12:00 pm\nGiorgi Butbaia\, University of New Hampshire\nTitle: Computational String Theory using Machine Learning\nAbstract: Calabi-Yau compactifications of the $E_8\times E_8$ heterotic string provide a promising route to recovering the four-dimensional particle physics described by the Standard Model. While the topology of the Calabi-Yau space determines the overall matter content in the low-energy effective field theory\, further details of the compactification geometry are needed to calculate the normalized physical couplings and masses of elementary particles. In this talk\, we present novel numerical techniques for computing physically normalized Yukawa couplings in a number of heterotic models in the standard embedding using geometric machine learning and equivariant neural networks. We observe that the results produced using these techniques are in excellent agreement with the expected values for certain special cases\, where the answers are known. In the case of the Tian-Yau manifold\, which defines a model with three generations and has $h^{2\,1}>1$\, we provide a first-of-its-kind calculation of the normalized Yukawa couplings. As part of this work\, we have developed a Python library called cymyc\, which streamlines the calculation of the Calabi-Yau metric and the Yukawa couplings on arbitrary Calabi-Yau manifolds that are realized as complete intersections and provides a framework for studying the differential geometric properties\, such as the curvature.\n\n\n12:00–1:30 pm\nLunch break\n\n\n1:30–2:30 pm\nEric Mjolsness\, UC Irvine\nTitle: Graph operators for science-applied AI/ML\nAbstract: Scalable\, structured graphs play a central role in mathematical problem definition for scientific applications of artificial intelligence and machine learning. 
Qualitatively diverse kinds of operators are necessary to bring these graphs to life. Continuous-time processes govern the evolution of spatial graph embeddings and other graph-local differential equation systems\, as well as the flow of probability between locally similar graph structures in a probabilistic Fock space\, according to rules in a dynamical graph grammar (DGG). Both kinds of dynamics have biophysical application\, e.g. to the dynamic cytoskeleton\, and both obey graph-centric time-evolution operators in an operator algebra that can be differentiated for learning. On the other hand\, coarse-scale discrete jumps in graph structure such as global mesh refinement can be modeled with a “graph lineage”: a sequence of sparsely interrelated graphs whose size grows roughly exponentially with level number. Graph lineages permit the definition of substantially more cost-efficient skeletal graph products\, as versions of classic binary graph operators such as the Cartesian product and direct product of graphs\, with analogous but not identical properties. Applications to deep neural networks and to multigrid numerical methods are shown.\nThese two graph operator frameworks are interrelated. Further graph lineage operators allow the definition of graph frontier spaces\, accommodating graph grammars and supporting the definition of skeletal graph-graph function spaces. In return\, “confluent” graph grammars\, e.g. for adaptive mesh generation\, permit the definition of graph lineages through iteration. I will also sketch the design of compatible AI for Science systems that may exploit DGGs.\nJoint work with Cory Scott and Matthew Hur.\n\n\n2:30–3:00 pm\nBreak\n\n\n3:00–5:00 pm\nPanel and Discussion Group: Jordan Ellenberg\, Tamara Veenstra\, Sébastien Racanière\, Kyu-Hwan Lee\, Sergei Gukov\n\n\n\n  \n\n  \n  \n 
URL:https://cmsa.fas.harvard.edu/event/mml_2025/
LOCATION:CMSA\, 20 Garden Street\, Cambridge\, Massachusetts 02138\, United States
CATEGORIES:Event,Workshop
ATTACH;FMTTYPE=image/jpeg:https://cmsa.fas.harvard.edu/media/MML_Reunion_poster.2.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250911T090000
DTEND;TZID=America/New_York:20250912T170000
DTSTAMP:20260507T001715Z
CREATED:20250502T175902Z
LAST-MODIFIED:20251026T044243Z
UID:10003743-1757581200-1757696400@cmsa.fas.harvard.edu
SUMMARY:Big Data Conference 2025
DESCRIPTION:Big Data Conference 2025 \nDates: Sep. 11–12\, 2025 \nLocation: Harvard University CMSA\, 20 Garden Street\, Cambridge & via Zoom \nThe Big Data Conference features speakers from the Harvard community as well as scholars from across the globe\, with talks focusing on computer science\, statistics\, math and physics\, and economics. \nInvited Speakers \n\nMarkus J. Buehler\, MIT\nYiling Chen\, Harvard\nJordan Ellenberg\, UW Madison\nYue M. Lu\, Harvard\nPankaj Mehta\, BU\nNick Patterson\, Harvard\nGautam Reddy\, Princeton\nTrevor David Rhone\, Rensselaer Polytechnic Institute\nTess Smidt\, MIT\n\nOrganizers: \nMichael M. Desai\, Harvard OEB | Michael R. Douglas\, Harvard CMSA | Yannai A. Gonczarowski\, Harvard Economics | Efthimios Kaxiras\, Harvard Physics | Melanie Weber\, Harvard SEAS \n  \nBig Data Youtube Playlist \n  \nSchedule \nThursday\, Sep. 11\, 2025 \n  \n\n\n\n9:00 am\nRefreshments\n\n\n9:30 am\nIntroductions\n\n\n9:45–10:45 am\nGautam Reddy\, Princeton \nTitle: Global epistasis in genotype-phenotype maps\n\n\n10:45–11:00 am\nBreak\n\n\n11:00 am–12:00 pm\nNick Patterson\, Harvard \nTitle: The Origin of the Indo-Europeans \nAbstract: Indo-European is the largest family of human languages\, with very wide geographical distribution and more than 3 billion native speakers. How did this family arise and spread? This question has been discussed for nearly 250 years\, but with the availability of DNA from ancient fossils it is now largely understood\, at least in broad outlines. We will describe what we now know about the origins.\n\n\n12:00–1:30 pm\nLunch break\n\n\n1:30–2:30 pm\nMarkus Buehler\, MIT \nTitle: Superintelligence for scientific discovery \nAbstract: AI is moving beyond prediction to become a partner in invention. While today’s models excel at interpolating within known data\, true discovery requires stepping outside existing truths. 
This talk introduces superintelligent discovery engines built on multi-agent swarms: diverse AI agents that interact\, compete\, and cooperate to generate structured novelty. Guided by Gödel’s insight that no closed system is complete\, these swarms create gradients of difference – much like temperature gradients in thermodynamics – that sustain flow\, invention\, and surprise. Case studies in protein design and music composition show how swarms escape data biases\, invent novel structures\, and weave long-range coherence\, producing creativity that rivals human processes. By moving from “big data” to “big insight”\, these systems point toward a new era of AI that composes knowledge across science\, engineering\, and the arts.\n\n\n2:30–2:45 pm\nBreak\n\n\n2:45–3:45 pm\nJordan Ellenberg\, UW Madison \nTitle: What does machine learning have to offer mathematics?\n\n\n3:45–4:00 pm\nBreak\n\n\n4:00–5:00 pm\nPankaj Mehta\, Boston University \nTitle: Thinking about high-dimensional biological data in the age of AI \nAbstract: The molecular biology revolution has transformed our view of living systems. Scientific explanations of biological phenomena are now synonymous with the identification of the genes and proteins. The preeminence of the molecular paradigm has only become more pronounced as new technologies allow us to make measurements at scale. Combining this wealth of data with new artificial intelligence (AI) techniques is widely viewed as the future of biology. Here\, I will discuss the promise and perils of this approach. I will focus on our unpublished work with collaborators on two fronts: (i) transformer-based models for understanding genotype-to-phenotype maps\, and (ii) LLM-based ‘foundational models’ for cellular identity\, such as TranscriptFormer\, which is trained on single-cell RNA sequencing (scRNAseq) data. 
While LLMs excel at capturing complex evolutionary and demographic structure in DNA sequence data\, they are much less adept at elucidating the biology of cellular identity. We show that simple parameter-free models based on linear algebra outperform TranscriptFormer on downstream tasks related to cellular identity\, even though TranscriptFormer has nearly a billion parameters. If time permits\, I will conclude by showing how we can combine ideas from linear algebra\, bifurcation theory\, and statistical physics to classify cell fate transitions using scRNAseq data.\n\n\n\n  \nFriday\, Sep. 12\, 2025  \n\n\n\n9:00–9:45 am\nRefreshments\n\n\n9:45–10:45 am\nYiling Chen\, Harvard \nTitle: Data Reliability Scoring \nAbstract: Imagine you are trying to make a data-driven decision\, but the data at hand may be noisy\, biased\, or even strategically manipulated. Can you assess whether such a dataset is reliable—without access to ground truth?\nWe initiate the study of reliability scoring for datasets reported by potentially strategic data sources. While the true data remain unobservable\, we assume access to auxiliary observations generated by an unknown statistical process that depends on the truth. We introduce the Gram Determinant Score\, a reliability measure that evaluates how well the reported data align with the unobserved truth\, using only the reported data and the auxiliary observations. The score comes with provable guarantees: it preserves several natural reliability orderings. Experimentally\, it effectively captures data quality in settings with synthetic noise and contrastive learning embeddings.\nThis talk is based on joint work with Shi Feng\, Fang-Yi Yu\, and Paul Kattuman.\n\n\n10:45–11:00 am\nBreak\n\n\n11:00 am–12:00 pm\nYue M. Lu\, Harvard \nTitle: Nonlinear Random Matrices in High-Dimensional Estimation and Learning \nAbstract: In recent years\, new classes of structured random matrices have emerged in statistical estimation and machine learning. 
Understanding their spectral properties has become increasingly important\, as these matrices are closely linked to key quantities such as the training and generalization performance of large neural networks and the fundamental limits of high-dimensional signal recovery. Unlike classical random matrix ensembles\, these new matrices often involve nonlinear transformations\, introducing additional structural dependencies that pose challenges for traditional analysis techniques. \nIn this talk\, I will present a set of equivalence principles that establish asymptotic connections between various nonlinear random matrix ensembles and simpler linear models that are more tractable for analysis. I will then demonstrate how these principles can be applied to characterize the performance of kernel methods and random feature models across different scaling regimes and to provide insights into the in-context learning capabilities of attention-based Transformer networks.\n\n\n12:00–1:30 pm\nLunch break\n\n\n1:30–2:30 pm\nTrevor David Rhone\, Rensselaer Polytechnic Institute \nTitle: Accelerating the discovery of van der Waals quantum materials using AI \nAbstract: van der Waals (vdW) materials are exciting platforms for studying emergent quantum phenomena\, ranging from long-range magnetic order to topological order. A conservative estimate for the number of candidate vdW materials exceeds ~10^6 for monolayers and ~10^12 for heterostructures. How can we accelerate the exploration of this entire space of materials? Can we design quantum materials with desirable properties\, thereby advancing innovation in science and technology? A recent study showed that artificial intelligence (AI) can be harnessed to discover new vdW Heisenberg ferromagnets based on Cr2Ge2Te6 [1]\, [2] and magnetic vdW topological insulators based on MnBi2Te4 [3]. 
In this talk\, we will harness AI to efficiently explore the large chemical space of vdW materials and to guide the discovery of vdW materials with desirable spin and charge properties. We will focus on crystal structures based on monolayer Cr2I6 of the form A2X6\, which are studied using density functional theory (DFT) calculations and AI. Magnetic properties\, such as the magnetic moment\, are determined. The formation energy is also calculated and used as a proxy for the chemical stability. We also investigate monolayers based on MnBi2Te4 of the form AB2X4 to identify novel topological materials. Further to this\, we study heterostructures based on MnBi2Te4/Sb2Te3 stacks. We show that AI\, combined with DFT\, can provide a computationally efficient means to predict the thermodynamic and magnetic properties of vdW materials [4]\, [5]. This study paves the way for the rapid discovery of chemically stable vdW quantum materials with applications in spintronics\, magnetic memory and novel quantum computing architectures.\n[1] T. D. Rhone et al.\, “Data-driven studies of magnetic two-dimensional materials\,” Sci. Rep.\, vol. 10\, no. 1\, p. 15795\, 2020.\n[2] Y. Xie\, G. Tritsaris\, O. Granas\, and T. Rhone\, “Data-Driven Studies of the Magnetic Anisotropy of Two-Dimensional Magnetic Materials\,” J. Phys. Chem. Lett.\, vol. 12\, no. 50\, pp. 12048–12054.\n[3] R. Bhattarai\, P. Minch\, and T. D. Rhone\, “Investigating magnetic van der Waals materials using data-driven approaches\,” J. Mater. Chem. C\, vol. 11\, p. 5601\, 2023.\n[4] T. D. Rhone et al.\, “Artificial Intelligence Guided Studies of van der Waals Magnets\,” Adv. Theory Simulations\, vol. 6\, no. 6\, p. 2300019\, 2023.\n[5] P. Minch\, R. Bhattarai\, K. Choudhary\, and T. D. Rhone\, “Predicting magnetic properties of van der Waals magnets using graph neural networks\,” Phys. Rev. Mater.\, vol. 8\, no. 11\, p. 114002\, Nov. 
2024.\nThis work used the Extreme Science and Engineering Discovery Environment (XSEDE)\, which is supported by National Science Foundation Grant No. ACI-1548562. This research used resources of the Argonne Leadership Computing Facility\, which is a DOE Office of Science User Facility supported under Contract No. DE-AC02-06CH11357. This material is based on work supported by the National Science Foundation CAREER award under Grant No. 2044842.\n\n\n2:30–2:45 pm\nBreak\n\n\n2:45–3:45 pm\nTess Smidt\, MIT \nTitle: Applications of Euclidean neural networks to understand and design atomistic systems \nAbstract: Atomic systems (molecules\, crystals\, proteins\, etc.) are naturally represented by a set of coordinates in 3D space labeled by atom type. This poses a challenge for machine learning due to the sensitivity of coordinates to 3D rotations\, translations\, and inversions (the symmetries of 3D Euclidean space). Euclidean symmetry-equivariant Neural Networks (E(3)NNs) are specifically designed to address this issue. They faithfully capture the symmetries of physical systems\, handle 3D geometry\, and operate on the scalar\, vector\, and tensor fields that characterize these systems. \nE(3)NNs have achieved state-of-the-art results across atomistic benchmarks\, including small-molecule property prediction\, protein-ligand binding\, and force prediction for crystals\, molecules\, and heterogeneous catalysis. By merging neural network design with group representation theory\, they provide a principled way to embed physical symmetries directly into learning. In this talk\, I will survey recent applications of E(3)NNs to materials design and highlight ongoing debates in the AI for atomistic sciences community: how to balance the incorporation of physical knowledge with the drive for engineering efficiency.\n\n\n\n 
URL:https://cmsa.fas.harvard.edu/event/bigdata_2025/
LOCATION:CMSA Room G10\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Big Data Conference,Conference,Event
ATTACH;FMTTYPE=image/jpeg:https://cmsa.fas.harvard.edu/media/Big-Data-2025_11x17.9-scaled.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250915T090000
DTEND;TZID=America/New_York:20250918T170000
DTSTAMP:20260507T001715Z
CREATED:20250710T134311Z
LAST-MODIFIED:20250930T154307Z
UID:10003755-1757926800-1758214800@cmsa.fas.harvard.edu
SUMMARY:The Geometry of Machine Learning
DESCRIPTION:The Geometry of Machine Learning \nDates: September 15–18\, 2025 \nLocation: Harvard CMSA\, Room G10\, 20 Garden Street\, Cambridge MA 02138 \nDespite the extraordinary progress in large language models\, mathematicians suspect that other dimensions of intelligence must be defined and simulated to complete the picture. Geometric and symbolic reasoning are among these. In fact\, there seems to be much to learn about existing ML by considering it from a geometric perspective\, e.g. what is happening to the data manifold as it moves through an NN? How can geometric and symbolic tools be interfaced with LLMs? A more distant goal\, one that seems only approachable through AIs\, would be to gain some insight into the large-scale structure of mathematics as a whole: the geometry of math\, rather than geometry as a subject within math. This conference is intended to begin a discussion on these topics. \nSpeakers \n\nMaissam Barkeshli\, University of Maryland\nEve Bodnia\, Logical Intelligence\nAdam Brown\, Stanford\nBennett Chow\, UCSD & IAS\nMichael Freedman\, Harvard CMSA\nElliot Glazer\, Epoch AI\nJames Halverson\, Northeastern\nJesse Han\, Math Inc.\nJunehyuk Jung\, Brown University\nAlex Kontorovich\, Rutgers University\nYann LeCun\, New York University & META*\nJared Duker Lichtman\, Stanford & Math Inc.\nBrice Ménard\, Johns Hopkins\nMichael Mulligan\, UCR & Logical Intelligence\nPatrick Shafto\, DARPA & Rutgers University\n\nOrganizers: Michael R. Douglas (CMSA) and Mike Freedman (CMSA) \n  \nGeometry of Machine Learning Youtube Playlist \n  \nSchedule \nMonday\, Sep. 15\, 2025 \n\n\n\n8:30–9:00 am\nMorning refreshments\n\n\n9:00–10:00 am\nJames Halverson\, Northeastern \nTitle: Sparsity and Symbols with Kolmogorov-Arnold Networks \nAbstract: In this talk I’ll review Kolmogorov-Arnold nets\, as well as new theory and applications related to sparsity and symbolic regression\, respectively.  
I’ll review essential results regarding KANs\, show how sparsity masks relate deep nets and KANs\, and how KANs can be utilized alongside multimodal language models for symbolic regression. Empirical results will necessitate a few slides\, but the bulk will be chalk.\n\n\n10:00–10:30 am\nBreak\n\n\n10:30–11:30 am\nMaissam Barkeshli\, University of Maryland \nTitle: Transformers and random walks: from language to random graphs \nAbstract: The stunning capabilities of large language models give rise to many questions about how they work and how much more capable they can possibly get. One way to gain additional insight is via synthetic models of data with tunable complexity\, which can capture the basic relevant structures of real data. In recent work we have focused on sequences obtained from random walks on graphs\, hypergraphs\, and hierarchical graphical structures. I will present some recent empirical results for work in progress regarding how transformers learn sequences arising from random walks on graphs. The focus will be on neural scaling laws\, unexpected temperature-dependent effects\, and sample complexity.\n\n\n11:30 am–12:00 pm\nBreak\n\n\n12:00–1:00 pm\nAdam Brown\, Stanford \nTitle: LLMs\, Reasoning\, and the Future of Mathematical Sciences \nAbstract: Over the last half decade\, the mathematical capabilities of large language models (LLMs) have leapt from preschooler to undergraduate and now beyond. This talk reviews recent progress\, and speculates as to what it will mean for the future of mathematical sciences if these trends continue.\n\n\n\n  \nTuesday\, Sep. 16\, 2025 \n\n\n\n8:30–9:00 am\nMorning refreshments\n\n\n9:00–10:00 am\nJunehyuk Jung\, Brown University \nTitle: AlphaGeometry: a step toward automated math reasoning \nAbstract: Last summer\, Google DeepMind’s AI systems made headlines by achieving Silver Medal level performance on the notoriously challenging International Mathematical Olympiad (IMO) problems. 
For instance\, AlphaGeometry 2\, one of these remarkable systems\, solved the geometry problem in a mere 19 seconds! \nIn this talk\, we will delve into the inner workings of AlphaGeometry\, exploring the innovative techniques that enable it to tackle intricate geometric puzzles. We will uncover how this AI system combines the power of neural networks with symbolic reasoning to discover elegant solutions.\n\n\n10:00–10:30 am\nBreak\n\n\n10:30–11:30 am\nBennett Chow\, UCSD and IAS \nTitle: Ricci flow as a test for AI\n\n\n11:30 am–12:00 pm\nBreak\n\n\n12:00–1:00 pm\nJared Duker Lichtman\, Stanford & Math Inc. and Jesse Han\, Math Inc. \nTitle: Gauss – towards autoformalization for the working mathematician \nAbstract: In this talk we’ll highlight some recent formalization progress using a new agent – Gauss. We’ll outline a recent Lean proof of the Prime Number Theorem in strong form\, completing a challenge set in January 2024 by Alex Kontorovich and Terry Tao. We hope Gauss will assist working mathematicians\, especially those who do not write formal code themselves.\n\n\n5:00–6:00 pm\nSpecial Lecture: Yann LeCun\, Science Center Hall C\n\n\n\n  \nWednesday\, Sep. 17\, 2025 \n\n\n\n8:30–9:00 am\nRefreshments\n\n\n9:00–10:00 am\nMichael Mulligan\, UCR and Logical Intelligence \nTitle: Spontaneous Kolmogorov-Arnold Geometry in Vanilla Fully-Connected Neural Networks \nAbstract: The Kolmogorov-Arnold (KA) representation theorem constructs universal\, but highly non-smooth inner functions (the first layer map) in a single (non-linear) hidden layer neural network. Such universal functions have a distinctive local geometry\, a “texture\,” which can be characterized by the inner function’s Jacobian\, $J(\mathbf{x})$\, as $\mathbf{x}$ varies over the data. It is natural to ask if this distinctive KA geometry emerges through conventional neural network optimization. 
We find that indeed KA geometry often does emerge through the process of training vanilla single hidden layer fully-connected neural networks (MLPs). We quantify KA geometry through the statistical properties of the exterior powers of $J(\mathbf{x})$: number of zero rows and various observables for the minor statistics of $J(\mathbf{x})$\, which measure the scale and axis alignment of $J(\mathbf{x})$. This leads to a rough phase diagram in the space of function complexity and model hyperparameters where KA geometry occurs. The motivation is first to understand how neural networks organically learn to prepare input data for later downstream processing and\, second\, to learn enough about the emergence of KA geometry to accelerate learning through a timely intervention in network hyperparameters. This research is the “flip side” of KA-Networks (KANs). We do not engineer KA into the neural network\, but rather watch KA emerge in shallow MLPs.\n\n\n10:00–10:30 am\nBreak\n\n\n10:30–11:30 am\nEve Bodnia\, Logical Intelligence \nTitle: \nAbstract: We introduce a method of topological analysis on spiking correlation networks in neurological systems. This method explores the neural manifold as in the manifold hypothesis\, which posits that information is often represented by a lower-dimensional manifold embedded in a higher-dimensional space. After collecting neuron activity from human and mouse organoids using a micro-electrode array\, we extract connectivity using pairwise spike-timing time correlations\, which are optimized for time delays introduced by synaptic delays. We then look at network topology to identify emergent structures and compare the results to two randomized models – constrained randomization and bootstrapping across datasets. 
In histograms of the persistence of topological features\, we see that the features from the original dataset consistently exceed the variability of the null distributions\, suggesting that the observed topological features reflect significant correlation patterns in the data rather than random fluctuations. In a study of network resiliency\, we found that random removal of 10 % of nodes still yielded a network with a lesser but still significant number of topological features in the homology group H1 (counts 2-dimensional voids in the dataset) above the variability of our constrained randomization model; however\, targeted removal of nodes in H1 features resulted in rapid topological collapse\, indicating that the H1 cycles in these brain organoid networks are fragile and highly sensitive to perturbations. By applying topological analysis to neural data\, we offer a new complementary framework to standard methods for understanding information processing across a variety of complex neural systems.\n\n\n11:30 am–12:00 pm\nBreak\n\n\n12:00–1:00 pm\nAlex Kontorovich\, Rutgers University \nTitle: The Shape of Math to Come \nAbstract: We will discuss some ongoing experiments that may have meaningful impact on what working in research mathematics might look like in a decade (if not sooner).\n\n\n5:00–6:00 pm\nMike Freedman Millennium Lecture: The Poincaré Conjecture and Mathematical Discovery (Science Center Hall D)\n\n\n\n  \nThursday\, Sep. 18\, 2025 \n\n\n\n8:30–9:00 am\nMorning refreshments\n\n\n9:00–10:00 am\nElliott Glazer\, Epoch AI \nTitle: FrontierMath to Infinity \nAbstract: I will discuss FrontierMath\, a mathematical problem solving benchmark I developed over the past year\, including its design philosophy and what we’ve learned about AI’s trajectory from it. 
I will then look much further out\, speculate about what a “perfectly efficient” mathematical intelligence should be capable of\, and discuss how high-ceiling math capability metrics can illuminate the path towards that ideal.\n\n\n10:00–10:30 am\nBreak\n\n\n10:30–11:30 am\nBrice Ménard\, Johns Hopkins \nTitle: Demystifying the over-parametrization of neural networks \nAbstract: I will show how to estimate the dimensionality of neural encodings (learned weight structures) to assess how many parameters are effectively used by a neural network. I will then show how their scaling properties provide us with fundamental exponents on the learning process of a given task. I will comment on connections to thermodynamics.\n\n\n11:30 am–12:00 pm\nBreak\n\n\n12:00–12:30 pm\nPatrick Shafto\, Rutgers \nTitle: Math for AI and AI for Math \nAbstract: I will briefly discuss two DARPA programs aiming to deepen connections between mathematics and AI\, specifically through geometric and symbolic perspectives. The first aims for mathematical foundations for understanding the behavior and performance of modern AI systems such as Large Language Models and Diffusion models. The second aims to develop AI for pure mathematics through an understanding of abstraction\, decomposition\, and formalization. I will close with some thoughts on the coming convergence between AI and math.\n\n\n12:30–12:45 pm\nBreak\n\n\n12:45–2:00 pm\nMike Freedman\, Harvard CMSA \nTitle: How to think about the shape of mathematics \nFollowed by group discussion \n \n\n\n\n  \n  \n  \nSupport provided by Logical Intelligence. \n \n  \n 
URL:https://cmsa.fas.harvard.edu/event/mlgeometry/
LOCATION:CMSA 20 Garden Street Cambridge\, Massachusetts 02138 United States
CATEGORIES:Conference,Event
ATTACH;FMTTYPE=image/jpeg:https://cmsa.fas.harvard.edu/media/GML_2025.7-scaled.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250916T170000
DTEND;TZID=America/New_York:20250916T180000
DTSTAMP:20260507T001715
CREATED:20250807T142820Z
LAST-MODIFIED:20250922T134159Z
UID:10003760-1758042000-1758045600@cmsa.fas.harvard.edu
SUMMARY:Geometry of Machine Learning Special Lecture: Yann LeCun
DESCRIPTION:Geometry of Machine Learning Special Lecture: Yann LeCun \nTitle: Self-Supervised Learning\, JEPA\, World Models\, and the future of AI \nDate: Tuesday\, Sep. 16\, 2025 \nTime: 5:00 pm ET \nLocation: Harvard Science Center\, Hall C & via Zoom Webinar
URL:https://cmsa.fas.harvard.edu/event/lecun91625/
LOCATION:Harvard Science Center\, 1 Oxford Street\, Cambridge\, MA\, 02138
CATEGORIES:Special Lectures
ATTACH;FMTTYPE=image/jpeg:https://cmsa.fas.harvard.edu/media/YannLeCun_GML-scaled.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250917T170000
DTEND;TZID=America/New_York:20250917T180000
DTSTAMP:20260507T001715
CREATED:20250311T134916Z
LAST-MODIFIED:20251010T115024Z
UID:10003656-1758128400-1758132000@cmsa.fas.harvard.edu
SUMMARY:Millennium Prize Problems Lecture - Michael Freedman: The Poincaré Conjecture and Mathematical Discovery  
DESCRIPTION:Millennium Prize Problems Lecture\nDate: September 17\, 2025 \nLocation: Harvard Science Center Hall D & via Zoom Webinar \nTime: 5:00–6:00 pm \nSpeaker: Michael Freedman\, Harvard CMSA and Logical Intelligence \nTitle: The Poincaré Conjecture and Mathematical Discovery \nAbstract: The AI age requires us to re-examine what mathematics is about. The Seven Millennium Problems provide an ideal lens for doing so. Five of the seven are core mathematical questions\, two are meta-mathematical – asking about the scope of mathematics. The Poincaré conjecture represents one of the core subjects\, manifold topology. I’ll explain what it is about\, its broader context\, and why people cared so much about finding a solution\, which ultimately arrived through the work of R. Hamilton and G. Perelman. Although stated in manifold topology\, the proof requires vast developments in the theory of parabolic partial differential equations\, some of which I will sketch. Like most powerful techniques\, the methods survive their original objectives and are now deployed widely in both three- and four-dimensional manifold topology.  \n  \nRead more about the Poincaré Conjecture at the Clay Math website. \nOrganizers: Martin Bridson\, Clay Mathematics Institute | Dan Freed\, Harvard University and CMSA | Mike Hopkins\, Harvard University \n\nMillennium Prize Problems Lecture Series
URL:https://cmsa.fas.harvard.edu/event/clay_91725/
LOCATION:Harvard Science Center Hall D\, 1 Oxford Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Millennium Prize Problems Lecture,Special Lectures
ATTACH;FMTTYPE=image/jpeg:https://cmsa.fas.harvard.edu/media/Freedman_web_ad.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251006T090000
DTEND;TZID=America/New_York:20251010T170000
DTSTAMP:20260507T001715
CREATED:20250502T180256Z
LAST-MODIFIED:20260422T160144Z
UID:10003747-1759741200-1760115600@cmsa.fas.harvard.edu
SUMMARY:Mathematical foundations of AI
DESCRIPTION:Mathematical foundations of AI \nDate: October 6–10\, 2025 \nLocation: Harvard CMSA\, Room G10\, 20 Garden Street\, Cambridge MA & via Zoom \nArtificial intelligence (AI) has achieved unprecedented advances\, yet our theoretical understanding lags significantly behind. This gap poses a major obstacle to improving AI’s safety and reliability. Since the classical tools of learning theory have proven insufficient for understanding AI\, researchers are now drawing insights from a vast array of fields—including functional analysis\, probability theory\, optimal transport\, optimization\, PDEs\, information theory\, geometry\, statistics\, electrical engineering\, and ergodic theory. Those interdisciplinary efforts are gradually shedding light on the underlying principles governing modern AI. This workshop centers on these mathematical and interdisciplinary developments. It will feature a series of talks from people in various subfields. Open problem and small-group sessions will help foster new connections and new research avenues. \n  \nSpeakers \n\nJason Altschuler\, University of Pennsylvania\nGuy Bresler\, MIT\nSinho Chewi\, Yale University\nLénaïc Chizat\, EPFL\nNabarun Deb\, University of Chicago\nEdgar Dobriban\, University of Pennsylvania\nAhmed El Alaoui\, Cornell University\nZhou Fan\, Yale University\nBoris Hanin\, Princeton University\nJason Klusowski\, Princeton University\nTengyu Ma\, Stanford University\nAlexander Rakhlin\, MIT\nYuting Wei\, University of Pennsylvania\nTijana Zrnic\, Stanford University\n\nOrganizer: Morgane Austern\, Harvard Statistics \n  \nSchedule \nMonday\, Oct. 6\, 2025 \n\n\n\n8:30–9:00 am\nMorning refreshments\n\n\n9:00–10:00 am\nYuting Wei\, U Penn \nTo Intrinsic Dimension and Beyond: Efficient Sampling in Diffusion Models \nThe denoising diffusion probabilistic model (DDPM) has become a cornerstone of generative AI. 
While sharp convergence guarantees have been established for DDPM\, the iteration complexity typically scales with the ambient data dimension of target distributions\, leading to overly conservative theory that fails to explain its practical efficiency. This has sparked recent efforts to understand how DDPM can achieve sampling speed-ups through automatic exploitation of intrinsic low dimensionality of data. This talk explores two key scenarios: (1) For a broad class of data distributions with intrinsic dimension k\, we prove that the iteration complexity of the DDPM scales nearly linearly with k\, which is optimal under the KL divergence metric; (2) For mixtures of Gaussian distributions with k components\, we show that DDPM learns the distribution with iteration complexity that grows only logarithmically in k. These results provide theoretical justification for the practical efficiency of diffusion models.\n\n\n10:00–10:30 am\nBreak\n\n\n10:30–11:30 am\nJason Klusowski\, Princeton \nThe Value of Side Information in Unlabeled Data \nPractitioners often work in settings with limited labeled data and abundant unlabeled data. During training\, they may even have access to extra side information (some labeled\, some not) that won’t be available once the model is deployed. When can this side information actually improve performance? I’ll present a simple framework where a rich-view model that sees the extra features generates pseudo-labels on the large unlabeled data\, and a deployment model that only sees the standard features is trained on both real and pseudo-labels. The two are trained iteratively: each deployment model update calibrates the next round of pseudo-labels\, and those refined pseudo-labels in turn guide the deployment model. Our theory shows that side information helps precisely when the rich-view and deployment models make different kinds of errors. 
We formalize this with a decorrelation score that quantifies how independent those errors are; the more independent\, the greater the performance gains.\n\n\n11:30 am–12:00 pm\nBreak\n\n\n12:00–1:00 pm\nGuy Bresler\, MIT \nGlobal Minimizers of Sigmoid Contrastive Loss \nThe meta-task of obtaining and aligning representations through contrastive pre-training is steadily gaining importance since its introduction in CLIP and ALIGN. In this paper we theoretically explain the advantages of synchronizing with trainable inverse temperature and bias under the sigmoid loss\, as implemented in the recent SigLIP models of Google DeepMind. Temperature and bias can drive the loss function to zero for a rich class of configurations that we call (m\,b)-Constellations. (m\,b)-Constellations are a novel combinatorial object related to spherical codes and are parametrized by a margin m and relative bias b. We use our characterization of constellations to theoretically justify the success of SigLIP on retrieval\, to explain the modality gap present in SigLIP\, and to identify the necessary dimension for producing high-quality representations. We also propose a reparameterization of the sigmoid loss with explicit relative bias\, which appears to improve training dynamics. Joint work with Kiril Bangachev\, Iliyas Noman\, and Yury Polyanskiy.\n\n\n\n  \nTuesday\, Oct. 7\, 2025 \n\n\n\n8:30–9:00 am\nMorning refreshments\n\n\n9:00–10:00 am\nLénaïc Chizat\, EPFL \nThe Hidden Width of Deep ResNets \nWe present a mathematical framework to analyze the training dynamics of deep ResNets that rigorously captures practical architectures (including Transformers) trained from standard random initializations. Our approach combines stochastic approximation of ODEs with propagation-of-chaos arguments. 
It yields three main insights:\n– Depth begets width: infinite-depth ResNets of any hidden width behave throughout training as if they were infinitely wide;\n– Unified phase diagram: the phase diagram of Transformers mirrors that of two-layer perceptrons\, once the appropriate substitutions are made;\n– Optimal shape scaling: for a given parameter budget P\, a Transformer with optimal shape converges to its limiting dynamics at rate P^{-1/6}.\nThis is based on https://arxiv.org/abs/2509.10167\n\n\n10:00–10:30 am\nBreak \n \n\n\n10:30–11:30 am\nBoris Hanin\, Princeton \nKernel Learning on Manifolds \nThis talk concerns the L_2 risk of minimum norm interpolation with n samples in the RKHS of a kernel K. Unlike most prior work in this space\, our kernels will be defined on any closed d-dimensional Riemannian manifold\, and we require only that the kernels are trace class and elliptic. With these assumptions we get nearly sharp L_2 risk bounds with high probability over the data. Like prior work on round spheres\, our results essentially say that the number of samples n\, the dimension of the manifold\, and some details of the kernel determine a natural spectral cutoff \lambda(n\,d\,K) and that minimal norm interpolation essentially learns exactly the projection of the data generating process onto the eigenfunctions of the Laplacian with frequency at most \lambda(n\,d\,K). Joint work with Mengxuan Yang.\n\n\n11:30–12:00\nBreak\n\n\n12:00–1:00\nZhou Fan\, Yale \nDynamical mean-field analysis of adaptive Langevin diffusions \nIn many applications of statistical estimation via sampling\, one may wish to sample from a high-dimensional target distribution that is adaptively evolving to the samples already seen. We study an example of such dynamics\, given by a Langevin diffusion for posterior sampling in a Bayesian linear regression model with i.i.d. regression design\, whose prior continuously adapts to the Langevin trajectory via a maximum marginal-likelihood scheme. 
Using techniques of dynamical mean-field theory (DMFT)\, we provide a precise characterization of a high-dimensional asymptotic limit for the joint evolution of the prior parameter and law of the Langevin sample. We then carry out an analysis of the equations that describe this DMFT limit\, under conditions of approximate time-translation-invariance which include\, in particular\, settings where the posterior law satisfies a log-Sobolev inequality. In such settings\, we show that this adaptive Langevin trajectory converges on a dimension-independent time horizon to an equilibrium state that is characterized by a system of replica-symmetric fixed-point equations\, and the associated prior parameter converges to a critical point of a replica-symmetric limit for the model free energy. We explore the nature of the free energy landscape and its critical points in a few simple examples\, where such critical points may or may not be unique.\n\n\n\n  \nWednesday\, Oct. 8\, 2025 \n\n\n\n8:30–9:00 am\nMorning refreshments\n\n\n9:00–10:00 am\nJason Altschuler\, U Penn \nNegative Stepsizes Make Gradient-Descent-Ascent Converge \nSolving min-max problems is a central question in optimization\, games\, learning\, and controls. Arguably the most natural algorithm is Gradient-Descent-Ascent (GDA); however\, since the 1970s\, conventional wisdom has argued that it fails to converge even on simple problems. This failure spurred the extensive literature on modifying GDA with extragradients\, optimism\, momentum\, anchoring\, etc. In contrast\, we show that GDA converges in its original form by simply using a judicious choice of stepsizes. The key innovation is the proposal of unconventional stepsize schedules that are time-varying\, asymmetric\, and (most surprisingly) periodically negative. We show that all three properties are necessary for convergence\, and that altogether this enables GDA to converge on the classical counterexamples (e.g.\, unconstrained convex-concave problems). 
The core intuition is that although negative stepsizes make backward progress\, they de-synchronize the min/max variables (overcoming the cycling issue of GDA) and lead to a slingshot phenomenon in which the forward progress in the other iterations is overwhelmingly larger. This results in fast overall convergence. Geometrically\, the slingshot dynamics leverage the non-reversibility of gradient flow: positive/negative steps cancel to first order\, yielding a second-order net movement in a new direction that leads to convergence and is otherwise impossible for GDA to move in. Joint work with Henry Shugart.\n\n\n10:00–10:30 am\nBreak\n\n\n10:30–11:30 am\nNabarun Deb\, U Chicago \nGenerative Modeling via Parabolic Monge-Ampère PDEs \nWe introduce a novel generative modeling framework based on a discretized parabolic Monge-Ampère PDE\, which emerges as a continuous limit of the Sinkhorn algorithm commonly used in optimal transport. Our method performs iterative refinement in the space of Brenier maps using a mirror gradient descent step. We establish theoretical guarantees for generative modeling through the lens of no-regret analysis\, demonstrating that the iterates converge to the optimal Brenier map under a variety of step-size schedules. As a technical contribution\, we derive a new Evolution Variational Inequality tailored to the parabolic Monge-Ampère PDE\, connecting geometry\, transportation cost\, and regret. Our framework accommodates non-log-concave target distributions\, constructs an optimal sampling process via the Brenier map\, and integrates favorable learning techniques from generative adversarial networks and score-based diffusion models.\n\n\n11:30–12:00\nBreak\n\n\n12:00–1:00\nSinho Chewi\, Yale \nDiscretization and distribution learning in diffusion models \nFirst\, I will review some literature on discretization of diffusion models\, focusing on the use of randomized midpoints for deterministic vs. stochastic samplers. 
Then\, I will argue that such sampling guarantees reduce distribution learning\, in the form of learning to generate a sample\, to score matching. To complement this result\, we reduce other forms of distribution learning (parameter estimation and density estimation) to score matching as well. This leads to new consequences for diffusion models\, such as asymptotic efficiency of a DDPM-based parameter estimator and algorithms for Gaussian mixture density estimation\, as well as to a general approach for establishing cryptographic hardness results for score estimation.\n\n\n\n  \nThursday\, Oct. 9\, 2025 \n\n\n\n8:30–9:00 am\nMorning refreshments\n\n\n9:00–10:00 am\nAhmed El Alaoui\, Cornell \nHow abundant are good interpolators? \nWe consider classifying labelled data in the interpolation regime where there exist linear classifiers (with possibly negative margin) correctly classifying all points in the dataset. Under the logistic model with Gaussian features\, we derive the large deviation rate function of the event that an interpolator chosen uniformly at random achieves a given generalization error. This describes the proportion of interpolators having any desired performance. We remark that in a wide regime of parameters\, the vast majority of interpolators have inferior performance to the one found via a simple linear programming procedure\, showing that the latter algorithm produces an atypically good classifier.\nThis is based on joint work with August Chen.\n\n\n10:00–10:30 am\nBreak\n\n\n10:30–11:30 am\nTengyu Ma\, Stanford \nSelf-play LLM Theorem Provers with Iterative Conjecturing and Proving \nI will discuss some works on using RL for theorem proving\, especially in the possible future regime where we run out of high-quality training data. 
To keep improving the models with limited data\, we draw inspiration from mathematicians\, who continuously develop new results\, partly by proposing novel conjectures or exercises (which are often variants of known results) and attempting to solve them. We design the Self-play Theorem Prover (STP) that simultaneously takes on two roles\, conjecturer and prover\, each providing training signals to the other. The model achieves state-of-the-art performance among whole-proof generation methods on miniF2F-test (65.0%\, pass@3200)\, ProofNet-test (23.9%\, pass@3200) and PutnamBench (8/644\, pass@3200). \n \n\n\n11:30–12:00\nBreak\n\n\n12:00–1:00\nEdgar Dobriban\, U Penn \nLeveraging synthetic data in statistical inference \nThe rapid proliferation of high-quality synthetic data — generated by advanced AI models or collected as auxiliary data from related tasks — presents both opportunities and challenges for statistical inference. This talk introduces a GEneral Synthetic-Powered Inference (GESPI) framework that wraps around any statistical inference procedure to safely enhance sample efficiency by combining synthetic and real data. Our framework leverages high-quality synthetic data to boost statistical power\, yet adaptively defaults to the standard inference method using only real data when synthetic data is of low quality. The error of our method remains below a user-specified bound without any distributional assumptions on the synthetic data\, and decreases as the quality of the synthetic data improves. This flexibility enables seamless integration with conformal prediction\, risk control\, hypothesis testing\, and multiple testing procedures\, all without modifying the base inference method. We demonstrate the benefits of our method on challenging tasks with limited labeled data\, including AlphaFold protein structure prediction and comparison of large reasoning models on complex math problems.\n\n\n\n  \nFriday\, Oct. 
10\, 2025 \n\n\n\n8:30–9:00 am\nMorning refreshments\n\n\n9:00–10:00 am\nTijana Zrnic\, Stanford \nProbably Approximately Correct Labels \nObtaining high-quality labeled datasets is often costly\, requiring either extensive human annotation or expensive experiments. We propose a method that supplements such “expert” labels with AI predictions from pre-trained models to construct labeled datasets more cost-effectively. Our approach results in probably approximately correct labels: with high probability\, the overall labeling error is small. This solution enables rigorous yet efficient dataset curation using modern AI models. We demonstrate the benefits of the methodology through text annotation with large language models\, image labeling with pre-trained vision models\, and protein folding analysis with AlphaFold. This is joint work with Emmanuel Candes and Andrew Ilyas.\n\n\n10:00–10:30 am\nBreak\n\n\n10:30–11:30 am\nAlexander Rakhlin\, MIT \nElements of Interactive Decision Making \nMachine learning methods are increasingly deployed in interactive environments\, ranging from dynamic treatment strategies in medicine to fine-tuning of LLMs using reinforcement learning. In these settings\, the learning agent interacts with the environment to collect data and necessarily faces an exploration-exploitation dilemma. We present a general framework for interactive decision making that subsumes multi-armed bandits\, contextual bandits\, structured bandits\, and reinforcement learning. We focus on both the statistical aspect of learning—aiming to develop a tight characterization of sample complexity in terms of properties of the class of models—and on the basic algorithmic primitives.\n\n\n\n  \n  \n\n  \n 
URL:https://cmsa.fas.harvard.edu/event/mathai/
LOCATION:CMSA 20 Garden Street Cambridge\, Massachusetts 02138 United States
CATEGORIES:Workshop
ATTACH;FMTTYPE=image/jpeg:https://cmsa.fas.harvard.edu/media/MathAI.5.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251015T170000
DTEND;TZID=America/New_York:20251015T180000
DTSTAMP:20260507T001715
CREATED:20250311T134919Z
LAST-MODIFIED:20251021T134849Z
UID:10003657-1760547600-1760551200@cmsa.fas.harvard.edu
SUMMARY:Millennium Prize Problems Lecture - Sourav Chatterjee: Yang-Mills and the foundations of quantum field theory
DESCRIPTION:Millennium Prize Problems Lecture \nDate: October 15\, 2025 \nTime: 5:00–6:00 pm \nLocation: Harvard Science Center Hall D\, 1 Oxford St.\, Cambridge MA \nSpeaker: Sourav Chatterjee\, Stanford University \nTitle: Yang-Mills and the foundations of quantum field theory \nAbstract: Yang-Mills theories are the building blocks of the Standard Model of particle physics\, which is the best available model for our universe at the quantum scale. Yet\, these theories do not have a rigorous mathematical foundation. Physical calculations are based on perturbation theory\, but there are various phenomena that are believed to be out of the reach of perturbative arguments. Building a mathematical foundation is\, therefore\, important even from the physics point of view. A program with this objective\, known as “constructive field theory”\, was initiated in the 1960s. In spite of many successes\, the program has not reached its original goal. Completing this program is the Clay Millennium Prize problem of Yang-Mills existence and mass gap. I will give a general introduction to the main questions\, and an overview of exciting recent progress that has rejuvenated the quest for a solution in the last ten years. \nRead more about the Yang-Mills Existence and Mass Gap at the Clay Math website. \nOrganizers: Martin Bridson\, Clay Mathematics Institute | Dan Freed\, Harvard University and CMSA | Mike Hopkins\, Harvard University \n\nMillennium Prize Problems Lecture Series
URL:https://cmsa.fas.harvard.edu/event/clay_101425/
LOCATION:Harvard Science Center Hall D\, 1 Oxford Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Millennium Prize Problems Lecture,Special Lectures
ATTACH;FMTTYPE=image/jpeg:https://cmsa.fas.harvard.edu/media/Chatterjee_web_ad.2-scaled.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251020T163000
DTEND;TZID=America/New_York:20251020T173000
DTSTAMP:20260507T001715
CREATED:20250912T180641Z
LAST-MODIFIED:20251030T151928Z
UID:10003752-1760977800-1760981400@cmsa.fas.harvard.edu
SUMMARY:Math Science Lectures in Honor of Raoul Bott | Dennis Gaitsgory\, MPIM | Function-theoretic implications of geometric Langlands
DESCRIPTION:Two talks on Function-theoretic implications of geometric Langlands\nDates: October 20 & 21\, 2025 \nTime: 4:30–5:30 pm \nLocation: Science Center Lecture Hall A and via Webinar \nSpeaker: Dennis Gaitsgory\, Max Planck Institute for Mathematics \nAbstract: The recently established geometric Langlands equivalence implies an explicit description of the space of (unramified) automorphic functions in terms of Langlands parameters. In these lectures\, we will derive this description and explain how far we can go with it in order to deduce some expected properties of automorphic functions\, e.g.\, the Ramanujan and Arthur multiplicity conjectures. This is joint work with Vincent Lafforgue and Sam Raskin. \nLecture 1: Monday\, October 20\, 2025\nFrom geometric to classical Langlands \nLecture 2: Tuesday\, October 21\, 2025\nAnalytic properties of automorphic functions as seen from algebraic geometry \n\nHarvard Mathematics Professor Raoul Bott (1923–2005) was a Hungarian-American mathematician known for numerous foundational contributions to geometry in its broad sense. He is best known for his Bott periodicity theorem\, the Morse–Bott functions which he used in this context\, and the Borel–Bott–Weil theorem.
URL:https://cmsa.fas.harvard.edu/event/mathscibott_2025/
LOCATION:Harvard Science Center\, 1 Oxford Street\, Cambridge\, MA\, 02138
CATEGORIES:Event,Math Science Lectures in Honor of Raoul Bott,Special Lectures
ATTACH;FMTTYPE=image/jpeg:https://cmsa.fas.harvard.edu/media/Bott-Lecture_2025.v2-scaled.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251021T163000
DTEND;TZID=America/New_York:20251021T173000
DTSTAMP:20260507T001715
CREATED:20250912T180816Z
LAST-MODIFIED:20251030T152043Z
UID:10003753-1761064200-1761067800@cmsa.fas.harvard.edu
SUMMARY:Math Science Lectures in Honor of Raoul Bott | Dennis Gaitsgory\, MPIM | Function-theoretic implications of geometric Langlands
DESCRIPTION:Two talks on Function-theoretic implications of geometric Langlands\nDates: October 20 & 21\, 2025 \nTime: 4:30–5:30 pm \nLocation: Science Center Lecture Hall A and via Webinar \nSpeaker: Dennis Gaitsgory\, Max Planck Institute for Mathematics \nAbstract: The recently established geometric Langlands equivalence implies an explicit description of the space of (unramified) automorphic functions in terms of Langlands parameters. In these lectures\, we will derive this description and explain how far we can go with it in order to deduce some expected properties of automorphic functions\, e.g.\, the Ramanujan and Arthur multiplicity conjectures. This is joint work with Vincent Lafforgue and Sam Raskin. \nLecture 1: Monday\, October 20\, 2025\nFunction-theoretic implications of geometric Langlands: From geometric to classical Langlands \nLecture 2: Tuesday\, October 21\, 2025\nFunction-theoretic implications of geometric Langlands: Analytic properties of automorphic functions as seen from algebraic geometry \n\nHarvard Mathematics Professor Raoul Bott (1923–2005) was a Hungarian-American mathematician known for numerous foundational contributions to geometry in its broad sense. He is best known for his Bott periodicity theorem\, the Morse–Bott functions which he used in this context\, and the Borel–Bott–Weil theorem.
URL:https://cmsa.fas.harvard.edu/event/mathscibott_2025-2/
LOCATION:Harvard Science Center\, 1 Oxford Street\, Cambridge\, MA\, 02138
CATEGORIES:Event,Math Science Lectures in Honor of Raoul Bott,Special Lectures
ATTACH;FMTTYPE=image/jpeg:https://cmsa.fas.harvard.edu/media/Bott-Lecture_2025-scaled.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251112T090000
DTEND;TZID=America/New_York:20251114T170000
DTSTAMP:20260507T001715
CREATED:20250502T181545Z
LAST-MODIFIED:20251113T214753Z
UID:10003745-1762938000-1763139600@cmsa.fas.harvard.edu
SUMMARY:Geometry Meets Physics: Finiteness\, Tameness\, and Complexity
DESCRIPTION:Geometry Meets Physics: Finiteness\, Tameness\, and Complexity \nDates: November 12–14\, 2025 \nLocation: CMSA G10\, 20 Garden Street\, Cambridge MA 02138 \n(note: this event is in-person only) \nFiniteness is a fundamental property in consistent physical theories. From the earliest days of quantum field theory and string theory\, the drive to eliminate unphysical infinities has been a guiding principle. More recently\, finiteness has emerged as a key criterion for constraining effective theories that can be embedded in quantum gravity. Formulating and testing these constraints remains a central challenge in current research. \nIn parallel\, mathematics has made remarkable advances in addressing finiteness questions using tame geometry. Built on the framework of o-minimal structures\, tame geometry offers a precise language for describing objects of finite geometric complexity. Recent developments\, such as sharp o-minimality\, go further by introducing a quantitative notion of complexity\, opening new directions for analyzing finiteness in mathematics and physics alike. \nThis workshop brings together mathematicians and physicists to exchange ideas\, explore new perspectives\, and spark collaborations at the interface of geometry\, logic\, and fundamental physics. \nInvited Speakers \n\nVijay Balasubramanian (UPenn)\nGregorio Baldi (CNRS\, IMJ-PRG & IAS)\nGal Binyamini (Weizmann Institute & IAS)\nRaf Cluckers (Lille\, France)\nMatilda Delgado (Max Planck Institute Munich)\nBruno Klingler (Humboldt University\, Berlin & IAS)\nAdele Padgett (Vienna)\nDavid Prieto (Utrecht)\nWashington Taylor (MIT)\nDavid Urbanik (IHES\, France & IAS)\nCumrun Vafa (Harvard)\nMick van Vliet (Utrecht)\nBenny Zak (Weizmann Institute & IAS)\n\nOrganizers: Thomas Grimm\, Harvard CMSA & Utrecht University | Gal Binyamini\, Weizmann Institute & IAS | Bruno Klingler\, Humboldt University\, Berlin & IAS \n  \nSchedule \n(download pdf) \nWednesday Nov. 
12\, 2025 \n8:30–8:55 am\nMorning refreshments (Common Room) \n8:55–9:00 am\nIntroductions \n9:00–10:30 am\nLecture\nSpeaker: Gal Binyamini\, Weizmann Institute & IAS\nTitle: O-minimality: finiteness and complexity\nAbstract: O-minimality is a mathematical formalism of “tame geometry”: a geometry where every set has finite geometric complexity. I will give an introduction to o-minimality in general\, and to quantitative variants where one measures the complexity of sets in terms of some natural parameters. I’ll try to focus on the main examples that potentially come up in the interaction with physics\, and describe the state of the art and some conjectures. \n10:30–11:00 am\nBreak \n11:00 am–12:00 pm\nSpeaker: Benny Zak\, Weizmann Institute & IAS\nTitle: Analytic tameness – complex cells\nAbstract: Complex cells are a complex analytic version of cells from o-minimality\, invented by Binyamini and Novikov. We aim to introduce complex cells\, and demonstrate their usefulness in quantifying the analytic information present in a complex set. If time permits\, we will discuss applications of this theory. \n12:00–1:00 pm\nCatered Lunch (Common Room) \n1:00–2:30 pm\nLecture\nSpeakers: David Prieto and Mick van Vliet\, Utrecht\nTitle: Tameness and Complexity in Physical Theories\nAbstract: We give an introductory overview of recent applications of o-minimality to physics\, focusing on quantum field theories and quantum gravity. In the first part of the lecture we explain how o-minimality makes a first appearance in physical theories when considering amplitudes in quantum field theory. In the second part\, we concentrate on a class of theories where finiteness principles seem to be essential\, namely the quantum field theories which are consistent with quantum gravity. We review some of these finiteness principles and interpret them through the lens of the o-minimal framework. 
Along the way\, we highlight recent progress in this direction\, as well as open questions to explore in the future. \n2:30–3:00 pm\nBreak with refreshments (Common Room) \n3:00–4:00 pm\nSpeaker: Matilda Delgado\, Max Planck Institute Munich\nTitle: Dualities and the Compactifiability of Moduli Space\nAbstract:  After introducing (self-)dualities in string theory and their action on the field content & spectrum of the theory\, I will present the notion of compactifiability for the moduli space of massless fields as the condition that its volume is finite or grows no faster than Euclidean space. I will argue that compactifiability generically implies the existence of non-trivial dualities by providing evidence from string theory. Moreover\, I will explain how one can connect compactifiability to the condition that the spectrum of objects charged under the duality group transform in a semisimple representation. Finally\, I will provide a bottom-up argument for compactifiability\, and argue that it (at least in supersymmetric cases) can be explained by the finiteness of the number of massless states upon compactification to 1D. Based on arXiv:2412.03640. \n5:00 PM\nMillennium Lecture and Reception: Pierre Deligne (IAS) (Science Center Hall D)\nTitle: What is the Hodge conjecture? \n  \nThursday\, Nov. 13\, 2025 \n8:30–9:00 am\nMorning refreshments (Common Room) \n9:00–10:30 am\nLecture\nSpeaker: Bruno Klingler\, Humboldt University\, Berlin & IAS\nTitle: Tame geometry and Hodge theory\nAbstract: I will give an introduction to applications of o-minimality in complex geometry\, in particular in Hodge theory. 
\n10:30–11:00 am\nBreak \n11:00 am–12:00 pm\nSpeaker: Cumrun Vafa\, Harvard\nTitle: The Swampland Program \n12:00–1:30 pm\nCatered Lunch (Common Room) \n1:30–2:30 pm\nSpeaker: Gregorio Baldi\, CNRS\, IMJ-PRG & IAS\nTitle: The Hodge locus\nAbstract: We will survey various recent results around the distribution of the Hodge locus of a (mixed) variation of Hodge structures. Various concrete applications to moduli spaces will also be presented. \n2:30–3:00 pm\nBreak with refreshments (Common Room) \n3:00–4:00 pm\nSpeaker: Vijay Balasubramanian\, U Penn\nTitle: Chaos and complexity in quantum dynamics \n4:30–5:30\nDiscussion/Q&A session \n6:30 PM\nDinner: Changsho Restaurant\, 1712 Massachusetts Ave.\, Cambridge\, MA 02138 \n  \nFriday Nov. 14\, 2025 \n8:30–9:00 am\nMorning refreshments (Common Room) \n9:00–10:00 am\nSpeaker: Washington Taylor\, MIT\nTitle: Finiteness\, connectivity\, and the power of fibrations in the Calabi-Yau landscape \n10:00–10:30am\nBreak \n10:30–11:30 am\nSpeaker: Adele Padgett\, Vienna\nTitle: Tameness of multisummable series\nAbstract: There are sophisticated theories of summability that map divergent series solutions of differential or functional equations to solutions that are holomorphic in sector-like domains. Van den Dries and Speissegger proved that functions obtained from real multisummable power series have tame geometric behavior when restricted to the real numbers. It would be desirable to know that these functions are also tame on their whole sector-like domains\, but recently Speissegger and I proved that these functions are in general only tame on part of their domains. I will present this result and discuss the domains on which some examples are tame\, including the Stirling series which appears in the asymptotic expansion of the Gamma function. In this talk\, “tame” means definable in an o-minimal structure. 
\n11:30 am–1:00 pm\nCatered Lunch (Common Room) \n1:00–2:00 pm\nSpeaker: Raf Cluckers\, Lille\, France\nTitle: Finiteness and tameness in (non-archimedean) geometry\nAbstract: Non-archimedean geometry works with orders of magnitude rather than with precise measurements. The former works for example with orders of vanishing of functions\, and the latter typically works with real or complex numbers. I will discuss recent progress on non-archimedean tame geometry. I will present analogues of o-minimality\, of Pila-Wilkie’s o-minimal counting results\, and of other finiteness results\, in non-archimedean settings. \n2:00–2:30 pm\nBreak with refreshments (Common Room) \n2:30–3:30 pm\nSpeaker: David Urbanik\, IHES\, France & IAS\nTitle: Degrees of Hodge Loci
URL:https://cmsa.fas.harvard.edu/event/geophys/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Conference,Event,Workshop
ATTACH;FMTTYPE=image/jpeg:https://cmsa.fas.harvard.edu/media/Geophys_poster.4-scaled.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251112T170000
DTEND;TZID=America/New_York:20251112T180000
DTSTAMP:20260507T001715
CREATED:20250311T134920Z
LAST-MODIFIED:20251201T154039Z
UID:10003658-1762966800-1762970400@cmsa.fas.harvard.edu
SUMMARY:Millennium Prize Problems Lecture - Pierre Deligne: What is the Hodge conjecture?
DESCRIPTION:Date: November 12\, 2025 \nTime: 5:00–6:00 pm \nLocation: Harvard Science Center Hall D\, 1 Oxford St.\, Cambridge MA \nSpeaker: Pierre Deligne\, Institute for Advanced Study \nTitle: What is the Hodge conjecture? \nAbstract: The Hodge conjecture is about projective non-singular complex algebraic varieties. It characterizes the cohomology classes coming from algebraic cycles. I will explain these terms\, tell why the conjecture is so hard to attack\, and why we care. \nSeries Pamphlet (pdf) \nRead more about the Hodge Conjecture at the Clay Math website. \nOrganizers: Martin Bridson\, Clay Mathematics Institute | Dan Freed\, Harvard University and CMSA | Mike Hopkins\, Harvard University \n\nMillennium Prize Problems Lecture Series
URL:https://cmsa.fas.harvard.edu/event/clay_111225/
LOCATION:Harvard Science Center Hall D\, 1 Oxford Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Millennium Prize Problems Lecture,Special Lectures
ATTACH;FMTTYPE=image/jpeg:https://cmsa.fas.harvard.edu/media/Deligne_web-ad-scaled.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251117T090000
DTEND;TZID=America/New_York:20251119T170000
DTSTAMP:20260507T001715
CREATED:20250502T182846Z
LAST-MODIFIED:20251215T145740Z
UID:10003749-1763370000-1763571600@cmsa.fas.harvard.edu
SUMMARY:Conference on Geometry and Statistics
DESCRIPTION:Conference on Geometry and Statistics \nDates: November 17–19\, 2025 \nLocation: CMSA G10\, 20 Garden Street\, Cambridge MA & via Zoom \nSpeakers \n\nCharles Fefferman\, Princeton University\nStephan Huckemann\, Georg-August Universität Göttingen\nSungkyu Jung\, Seoul National University\nKei Kobayashi\, Keio University\nClément Levrard\, Université de Rennes\nKer-Chau Li\, University of California\, Los Angeles\nRong Ma\, Harvard University\nSteve Marron\, University of North Carolina\nEzra Miller\, Duke University\nHans-Georg Müller\, University of California\, Davis\nWilderich Tuschmann\, Karlsruhe Institute of Technology\nMelanie Weber\, Harvard University\nAndrew Wood\, Australian National University\nHorng-Tzer Yau\, Harvard University\n\nOrganizer: Zhigang Yao\, National University of Singapore \nYoutube Playlist \nSCHEDULE \ndownload pdf \nMonday\, Nov. 17\, 2025 \n9:00–9:25 am\nMorning refreshments \n9:25–9:30 am\nIntroductions \n9:30–10:30 am\nSpeaker: Stephan Huckemann\, Georg-August Universität Göttingen\nTitle: The Probability of the Cut Locus of a Fréchet Mean\nAbstract: We show that the cut locus of a Fréchet mean of a random variable on a connected and complete Riemannian manifold has zero probability\, a result known previously in special cases (Le and Barden\, 2014) and conjectured in general. The proof is based on first order and second order considerations\, where the latter are based on a recent result by Générau (2020) on “Laplacians in the barrier sense”. This generalizes to Fréchet p-means for p > 2. The former also allow us to rule out stickiness on Riemannian manifolds\, and for generalization to 1 <= p < 2\, with a conjecture. We close with discussing and conjecturing extensions to noncomplete manifolds and more general metric spaces. This is joint work with Alexander Lytchak. \n\nGénérau\, F. (2020). Laplacian of the distance function on the cut locus on a Riemannian manifold. Nonlinearity 33(8)\, 3928.\nLe\, H. 
and D. Barden (2014). On the measure of the cut locus of a Fréchet mean. Bulletin of the London Mathematical Society 46(4)\, 698–708.\nLytchak\, A. and S. F. Huckemann (2025). Zero mass at the cut locus of a Fréchet mean on a Riemannian manifold. arXiv preprint arXiv:2508.00747.\n\n10:30–10:45 am\nbreak \n10:45 am–11:45 am\nSpeaker: Hans-Georg Müller\, University of California\, Davis\nTitle: Conformal Inference for Random Objects\nAbstract: The underlying probability measure of random objects\, i.e.\, metric-space-valued random variables\, can be probed by distance profiles. These are one-dimensional distributions of probability mass falling into balls of increasing radius. In a regression setting with Euclidean covariates X and responses Y that are random objects\, one can consider conditional Fréchet means that can be implemented with Fréchet regression and also conditional distance profiles\, conditioning on X. Conditional distance profiles can then be leveraged to obtain conditional average transport costs\, the expected cost for transporting a fixed conditional distance profile to a randomly selected conditional distance profile. The conditional average transport costs can then be utilized to obtain conditional conformity scores. In conjunction with the split conformal algorithm\, these scores lead to conditional prediction sets located in the object space with asymptotic conditional validity and attractive finite sample behavior. Based on joint work with Hang Zhou (UNC). \n11:45 am–1:15 pm\nLunch (Catered) \n1:15–2:15 pm\nSpeaker: Horng-Tzer Yau\, Harvard\nTitle: Ramanujan property of random regular graphs and delocalization of random band matrices\nAbstract: In this lecture\, we review recent works on random matrices. The first result concerns the normalized adjacency matrix of a random $d$-regular graph on $N$ vertices with any fixed degree $d\geq 3$\, whose eigenvalues we denote by $\lambda_1=d/\sqrt{d-1}\geq \lambda_2\geq\lambda_3\geq\cdots\geq \lambda_N$. 
We establish the edge universality for random $d$-regular graphs\, namely\, the distributions of $\lambda_2$ and $-\lambda_N$ converge to the Tracy-Widom$_1$ distribution associated with the Gaussian Orthogonal Ensemble. As a consequence\, for sufficiently large $N$\, approximately $69\%$ of $d$-regular graphs on $N$ vertices are Ramanujan\, meaning $\max\{\lambda_2\,|\lambda_N|\}\leq 2$. This resolves a conjecture by Sarnak and Miller-Novikoff-Sabelli.\nThe second result concerns $N \times N$ Hermitian $d$-dimensional random band matrices with band width $W$. In the bulk of the spectrum and in the large $N$ limit\, we prove that all $L^2$-normalized eigenvectors are delocalized in all dimensions under suitable conditions on $W$ and $N$. In addition\, we prove that the eigenvalue statistics are given by those of the Gaussian unitary ensemble. \n2:15–2:45 pm\nbreak with refreshments \n2:45–3:45 pm\nSpeaker: Clément Levrard\, Université de Rennes\nTitle: Optimal reach estimation\nAbstract: The reach of an embedded submanifold\, a notion that dates back to the famous work Curvature measures of H. Federer\, may be understood as a scale under which the submanifold is flat enough so that traditional Euclidean techniques in statistics locally apply\, up to some approximation. I will present several ways to estimate the reach from a sample (on the submanifold)\, some of them being optimal from the point of view of minimax estimation theory. Along the way\, intermediate estimation problems of local and global quantities will arise (curvature estimation\, weak feature size estimation\, distance estimation\, etc.)\, for which various phenomena can occur from a statistical point of view (different convergence rates\, inconsistency). This will be an opportunity to provide a selective overview of the state of the art on these issues. 
\n4:30–5:30 pm\nCMSA Colloquium\nSpeaker: Zhigang Yao (National University of Singapore)\nTitle: Interaction of Statistics and Geometry: A New Landscape for Data Science\nAbstract: Classical statistics views data as real numbers or vectors in Euclidean space\, but modern challenges increasingly involve data with intrinsic geometric structures. A central problem in this direction is manifold fitting\, with origins in H. Whitney’s work of the 1930s. The Geometric Whitney Problems ask: given a set\, when can we construct a smooth d-dimensional manifold that approximates it\, and how accurately can we estimate it?\nIn this talk\, I will discuss recent progress on manifold fitting and its role in bridging geometry and data science. While many existing methods rely on restrictive assumptions\, the manifold hypothesis—that data often lie near non-Euclidean structures—remains fundamental in modern statistical learning. I will highlight both theoretical insights and algorithmic challenges\, drawing on recent works as well as ongoing research. \nYoutube video \nTuesday\, Nov. 18\, 2025 \n9:00–9:30 am\nMorning refreshments \n9:30–10:30 am\nSpeaker: Charles Fefferman\, Princeton University (via Zoom)\nTitle: Extrinsic and intrinsic manifold learning\, old and new\nAbstract: The talk will include an exposition of the old paper “Testing the manifold hypothesis”\, joint work with S. Mitter and H. Narayanan\, on extrinsic manifold learning (the manifold to be learned is assumed to be embedded in a high-dimensional Euclidean space). The talk will also include a new result on intrinsic manifold learning (the manifold to be learned is not assumed to be embedded\, and the data consist of intrinsic distances corrupted by noise)\, provided the result is proven by the time of the conference. 
\n10:30–10:45 am\nbreak \n10:45 am–11:45 am\nSpeaker: Steve Marron\, University of North Carolina\nTitle: Data Integration Via Analysis of Manifolds (DIVAM)\nAbstract: A major challenge in the age of Big Data is the integration of disparate data types into a single data analysis. That was tackled by Data Integration Via Analysis of Subspaces (DIVAS) in the context of data blocks measured on a common set of experimental cases. Joint variation was defined in terms of modes of variation having identical scores across data blocks. DIVAS allowed mathematically rigorous formulation of individual variation within each data block in terms of individual modes. The goal of DIVAM is to intrinsically extend the DIVAS approach to data objects lying in manifolds\, such as shape data. \n11:45 am–1:15 pm\nLunch Break \n1:15–2:15 pm\nSpeaker: Ker-Chau Li\, University of California\, Los Angeles\nTitle: Investigation of Data clouds: From Galton’s Ellipses to Explainable AI (XAI)\, modeling or molding?\nAbstract: Francis Galton’s seminal 1886 visualization of regression toward the mean in trait inheritance is arguably the first and most influential example of geometric thinking applied to statistical modeling. The pioneering geometric insight driving Galton’s use of elliptical contours to discover the bivariate normal distribution laid down the foundation for classic multivariate analysis (e.g.\, PCA\, canonical correlation) and profoundly impacts modern methods like diffusion models.\nStatistical models\, particularly those based on parsimony\, are effective for characterizing data distribution and facilitating scientific rule induction. However\, the rise of unstructured big data (like images) has challenged these parsimonious approaches\, necessitating the use of deep learning models. These models\, containing billions of parameters\, sacrifice transparency to excel in prediction. 
Seeking solutions to this “black-box” dilemma is now the heart of Explainable AI (XAI).\nLeveraging the simplicity of elementary geometric concepts\, this talk will present a new path toward interpretable and parsimonious XAI. Unstructured big data is highly plastic. Our approach moves beyond the standard data modeling perspective—which answers what the data is—and introduces a novel data molding perspective. This shift is key to unlocking the full potential of data’s plasticity\, allowing us to effectively answer the crucial question: what the data can be used for.\nI will first discuss a connection between manifold learning and my earlier works\, helical confounding and liquid association. I will then turn to the data molding perspective and present two novel notions: mold-compliance and artificial-trait configurative-generation (ATCG). These notions guide our recent efforts in formulating novel algorithms for image data investigation\, addressing issues like prediction validity and within-class heterogeneity. Data molding entails a dramatically different feature space extraction\, which consequently shifts the subsequent investigation on the data clouds from out-of-distribution (OOD) to mold-violation\, and from UMAP clustering to ATCG-induced hierarchical clustering. \n2:15–2:45 pm\nbreak with refreshments \n2:45–3:45 pm\nSpeaker: Andrew Wood\, Australian National University\nTitle: Empirical likelihood methods for Fréchet means on open books\nAbstract: The open book is a simple example of a stratified space that captures some (but not all) of the properties of stratified spaces. Central limit theory for open books plus relevant background is given by Hotz et al. (2013\, Annals of Applied Probability). In this talk I will describe some basic inference procedures for Fréchet means in open books based on empirical likelihood (Owen\, book\, 2001). 
Empirical likelihood (EL) is a type of nonparametric likelihood that can be useful for many types of data\, including manifold-valued data and data from stratified spaces. An EL approach to basic inference for Fréchet means will be described. In particular\, it will be shown how the non-regularity in the geometry of open books can result in non-regular behaviour in Wilks’s theorem (i.e. the large sample likelihood ratio test). The talk will also discuss difficulties in extending the EL inference theory from open books to more general stratified spaces\, where the difference in dimension of adjacent strata can be 2 or more. For discussion of more general stratified spaces than open books\, see the orthant spaces discussed in Barden and Le (2018\, Proc of London Math Society) and the general stratified space setting considered by Mattingly et al. (2023\, arxiv). \n3:45–4:00 pm\nbreak \n4:00–5:00 pm\nSpeaker: Wilderich Tuschmann\, Karlsruhe Institute of Technology\nTitle: A Spectator’s Perspective on the Manifold Hypothesis\nAbstract: At its core\, the Manifold Hypothesis asserts that real-world\, high-dimensional data is not uniformly or randomly distributed throughout its high-dimensional “ambient” space\, but concentrated on or near a low-dimensional manifold (or a collection of manifolds) embedded within that high-dimensional ambient space.\nIn my talk\, I will discuss reasons and facts that speak for as well as against this hypothesis and also address geometric alternatives. \n  \nWednesday\, Nov. 19\, 2025 \n9:00–9:30 am\nMorning refreshments \n9:30–10:30 am\nSpeaker: Melanie Weber\, Harvard University\nTitle: Ricci Curvature\, Ricci Flow\, and the Geometry of Learning\nAbstract: Geometric structure in data plays a crucial role in machine learning. In this talk\, we study this observation through the lens of Ricci curvature and its associated Ricci flow. 
We start by reviewing a discrete notion of Ricci curvature introduced by Ollivier and the geometric flow that it induces. We further discuss the relationship between discrete Ricci curvature and its continuous counterpart via discrete-to-continuum consistency results\, which imply that discrete Ricci curvature can provably characterize the geometry of a data manifold based on a finite sample. This provides a theoretical foundation for several applications of discrete Ricci curvature in machine learning\, two of which we discuss in the remainder of this talk. First\, we analyze learned feature representations in deep neural networks and show that they transform during training in ways that closely resemble a discrete Ricci flow. Our analysis reveals that nonlinear activations shape class separability and suggests geometry-informed training principles such as early stopping and depth selection. Second\, we turn to deep learning on graphs\, where we address representational limitations of state of the art graph neural networks through curvature-based data augmentations. We show that augmenting input graphs with geometric information provably increases the representational power of such models and yields performance gains in practice. \n10:30–10:45 am\nbreak \n10:45 am–11:45 am\nSpeaker: Ezra Miller\, Duke University\nTitle: Extracting bar lengths from multiparameter persistent homology\nAbstract: Persistent homology in one parameter can be summarized using bar codes or persistence diagrams\, which are elementary gadgets with many features amenable to vectorization and hence statistical analysis. For example\, early work with Bendich\, Marron\, Pieloch\, and Skwerer showed how to extract meaningful statistics from the top 100 bar lengths in persistent homology summaries of brain arteries. The story for persistent homology with multiple parameters\, on the other hand\, is still developing. 
Although it has the potential to be much more flexible and informative\, multipersistence has structural issues that present fundamental mathematical challenges. There is no consensus on what might be meant by a “bar”\, let alone “the top 100 bar lengths”. This talk recalls the basics of single and multiparameter persistent homology and discusses some of the mathematical issues\, including obstacles and potential routes forward. \n11:45 am–1:15 pm\nLunch Break \n1:15–2:15 pm\nSpeaker: Kei Kobayashi\, Keio University\nTitle: Metric Transformations of Data Spaces: Curvature Control and Related Developments\nAbstract: We present our proposed method of increasing the accuracy of data analysis by means of two transformations of the metric of the data space. The first transformation is based on the curve length defined by the integral of the power of the density function\, which can be computed approximately using an empirical graph; the second transformation can be interpreted as the extrinsic distance when the data space is embedded in a metric cone. The advantage of both distance transformations is that the hyperparameters allow the curvature to be monotonically transformed in a specific sense. Some statistical applications of these transformations and theoretical justifications are presented. Detailed analyses of the geodesics obtained by this method for several simple probability distributions will also be presented. The main part of this work is based on joint works with Henry P. Wynn. \n2:15–2:45 pm\nbreak with refreshments \n2:45–3:45 pm\nSpeaker: Sungkyu Jung\, Seoul National University\nTitle: Generalized Fréchet means with random minimizing domains and its strong consistency\nAbstract: In this talk\, I will discuss a novel extension of Fréchet means\, referred to as generalized Fréchet means\, as a comprehensive framework for describing the characteristics of random elements. 
The generalized Fréchet mean is defined as the minimizer of a cost function\, and the framework encompasses various extensions of Fréchet means that have appeared in the literature. The most distinctive feature of the proposed framework is that it allows the domain of minimization for the empirical generalized Fréchet means to be random and different from that of its population counterpart. This flexibility broadens the applicability of the Fréchet mean framework to various statistical scenarios\, including sequential dimension reduction for non-Euclidean data. We establish a strong consistency theorem for generalized Fréchet means. Applications such as verifying the consistency of principal geodesic analysis on the hypersphere\, compositional principal component analysis on the composition space\, and k-medoids clustering for data on a metric space will be discussed. \n3:45–4:00 pm\nbreak \n4:00–5:00 pm\nSpeaker: Rong Ma\, Harvard University\nTitle: Modern Nonlinear Embedding Methods Unpacked\nAbstract: Learning and representing low-dimensional structures from noisy\, high-dimensional data is a cornerstone of modern data science. Stochastic neighbor embedding algorithms\, a family of nonlinear dimensionality reduction and data visualization methods\, with t-SNE and UMAP as two leading examples\, have become very popular in recent years. Yet despite their wide applications\, these methods remain subject to points of debate\, including limited theoretical understanding\, ambiguous interpretations\, and sensitivity to tuning parameters. In this talk\, I will present our recent efforts to decipher and improve these nonlinear embedding approaches. 
Our key results include a rigorous theoretical framework that uncovers the intrinsic mechanisms\, large-sample limits\, and fundamental principles underlying these algorithms; a set of theory-informed practical guidelines for their principled use in trustworthy biological discovery; and a collection of new algorithms that address current limitations and improve performance in areas such as bias reduction and stability. Throughout the talk\, I will highlight how these advances not only deepen our theoretical understanding but also open new avenues for scientific discovery.
URL:https://cmsa.fas.harvard.edu/event/geostat_2025/
LOCATION:CMSA 20 Garden Street Cambridge\, Massachusetts 02138 United States
CATEGORIES:Conference
ATTACH;FMTTYPE=image/jpeg:https://cmsa.fas.harvard.edu/media/Geostat.3-scaled.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251203T170000
DTEND;TZID=America/New_York:20251203T180000
DTSTAMP:20260507T001715
CREATED:20250409T160258Z
LAST-MODIFIED:20251205T171720Z
UID:10003659-1764781200-1764784800@cmsa.fas.harvard.edu
SUMMARY:Millennium Prize Problems Lecture - Madhu Sudan: P vs NP Problem
DESCRIPTION:Pamphlet (pdf) \nSlides (pdf) \nDate: December 3\, 2025 \nTime: 5:00–6:00 pm \nLocation: Harvard Science Center Hall D\, 1 Oxford St.\, Cambridge MA \nSpeaker: Madhu Sudan\, Harvard University \nTitle: The P vs. NP problem: An Existential Question for Mathematics \nAt the beginning of the twentieth century\, in response to questions raised by Hilbert\, illustrious mathematicians such as Godel\, Church and Turing formalized the notion of theorems and proofs. Proofs are automatically verifiable\, while theorems are logical propositions for which proofs exist. The formal definition of a computer\, a definition that had strong influence on the later development of the technology\, was a by-product of the effort to define the phrase “automatically verifiable”! \nWhile the resulting theory already had major implications\, one notion was\, however\, missing from the early definitions. Proofs were meant to be easily verifiable\, while determining the truth of a proposition/conjecture (arguably a core task of mathematics) was not necessarily so. But what is “easiness” and how is it to be defined? While this was already hinted at by Godel in the 50s\, the notion was finally formalized in seminal works of Cook\, Levin and Karp in the early 70s. Central ideas here included the adoption of the view that polynomial-time algorithms are (the only) tractable ones\, and the realization that algorithms seeking to remove the existential quantifier in the definition of a “theorem” lead naively to exponential-time algorithms. But are there no sophisticated algorithms to search for proofs? This is the profound “Is P = NP?” question. \nIn this talk we will introduce the question and explain the implications of resolutions of this question for the modern computing infrastructure\, for mathematics\, and for other sciences. We will briefly describe the state of progress on this question and recent progress on weaker forms of this question. 
Finally\, we will also discuss why one may believe that P != NP (proof search cannot be automated) even in the face of accumulating evidence of the ability of computers to solve more and more complex mathematical problems\, which seem to implement brute-force search in less than exponential time. \n  \nRead more about the P vs NP Problem at the Clay Math website. \n  \nOrganizers: Martin Bridson\, Clay Mathematics Institute | Dan Freed\, Harvard University and CMSA | Mike Hopkins\, Harvard University \n\nMillennium Prize Problems Lecture Series
URL:https://cmsa.fas.harvard.edu/event/clay_12325/
LOCATION:Harvard Science Center Hall D\, 1 Oxford Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Millennium Prize Problems Lecture,Special Lectures
ATTACH;FMTTYPE=image/jpeg:https://cmsa.fas.harvard.edu/media/Sudan_web-ad_CROP-scaled.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260204T170000
DTEND;TZID=America/New_York:20260204T180000
DTSTAMP:20260507T001715
CREATED:20250409T160357Z
LAST-MODIFIED:20260210T204515Z
UID:10003723-1770224400-1770228000@cmsa.fas.harvard.edu
SUMMARY:Millennium Prize Problems Lecture - Barry Mazur: About the Birch and Swinnerton–Dyer Conjecture
DESCRIPTION:Date: February 4\, 2026 \nTime: 5:00–6:00 pm \nLocation: Harvard Science Center Hall C\, 1 Oxford St.\, Cambridge MA \nSpeaker: Barry Mazur\, Harvard University \nTitle: About the Birch and Swinnerton–Dyer Conjecture \nAbstract: \nIn the 1950s Bryan Birch and Peter Swinnerton–Dyer made computations that suggested a striking connection between a basic global invariant of an elliptic curve E over the field of rational numbers (namely\, the rank of its group of rational points) and certain asymptotics of its local arithmetic invariants (i.e.\, the number of its rational points over finite fields). \nThis initial observation has evolved into their conjecture. My lecture will be an introduction to the general ideas behind its ever-expanding development. \nRead more about the Birch and Swinnerton–Dyer Conjecture at the Clay Math website. \n  \nOrganizers: Martin Bridson\, Clay Mathematics Institute | Dan Freed\, Harvard University and CMSA | Mike Hopkins\, Harvard University \nBarry Mazur joined the Harvard University faculty in 1959 as a Junior Fellow in the Society of Fellows and advanced through the ranks to become the Gerhard Gade University Professor of Mathematics\, a position he has held since 1998. During his tenure at Harvard\, he has mentored 60 doctoral students and served as a pivotal figure in bridging topology and number theory\, notably through his classification of the possible torsion subgroups of elliptic curves over the rational numbers (Mazur’s torsion theorem)\, which identifies exactly 15 possible finite groups. This theorem\, detailed in his 1977 paper “Modular curves and the Eisenstein ideal\,” provided crucial insights into the Taniyama-Shimura conjecture and laid groundwork for Andrew Wiles’s 1994 proof of Fermat’s Last Theorem. \nHis broader research includes seminal works on étale homotopy theory (co-authored with Michael Artin in 1969)\, the arithmetic moduli of elliptic curves (with Nicholas M. 
Katz in 1985)\, and the Iwasawa main conjecture (proved with Andrew Wiles in 1984)\, as well as advancements in p-adic L-functions and the formulation of the Fontaine-Mazur conjecture on Galois representations. Mazur’s influence extends to public communication of mathematics; he has authored books like Imagining Numbers (2003)\, exploring historical perspectives on imaginary numbers. \nAmong his numerous honors\, Mazur received the Cole Prize in Number Theory from the American Mathematical Society in 1982\, the Chauvenet Prize in 1994 for expository writing\, the Leroy P. Steele Prize for Lifetime Achievement in 2000\, and election to the National Academy of Sciences in 1982. In 2011 (presented in 2013)\, he was awarded the National Medal of Science by President Barack Obama for his pioneering work in these fields. Most recently\, in 2022\, he received the Chern Medal from the International Mathematical Union\, recognizing his profound discoveries and mentorship. \n  \n\nMillennium Prize Problems Lecture Series \n 
URL:https://cmsa.fas.harvard.edu/event/clay_2426/
LOCATION:Harvard Science Center Hall D\, 1 Oxford Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Millennium Prize Problems Lecture,Special Lectures
ATTACH;FMTTYPE=image/jpeg:https://cmsa.fas.harvard.edu/media/Mazur_AD.hallc_.web_.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260304T160000
DTEND;TZID=America/New_York:20260304T170000
DTSTAMP:20260507T001715
CREATED:20260108T200326Z
LAST-MODIFIED:20260316T161023Z
UID:10003868-1772640000-1772643600@cmsa.fas.harvard.edu
SUMMARY:2026 Ding Shum Lecture: Sanjeev Arora\, Princeton
DESCRIPTION:2026 Ding Shum Lecture \nDate: March 4\, 2026 \nTime: 4:00 pm \nLocation: Harvard Science Center Hall D & via Zoom Webinar \nSpeaker: Sanjeev Arora\, Princeton \nTitle: How could a Superhuman AI mathematician come about? \n\nAbstract: Can AI systems exceed the capabilities of the human experts who provided their training data? The talk will examine the hypothesis of AI self‑improvement\, involving mechanisms such as synthetic data generation\, reinforcement learning\, and tool‑augmented reasoning with formal verification loops. \nI will also present recent work at Princeton\, including the Gödel Prover V2 for Lean‑based theorem proving and a new inference pipeline that achieved state‑of‑the‑art performance (at the time of evaluation) on IMO‑ProofBench (Advanced) at moderate inference costs ($20–$30 per problem). These will illustrate how AI systems are sometimes able to escape “cognitive wells”—local optima in a model’s reasoning capabilities. While providing evidence for the feasibility of self‑improvement\, they also highlight important hurdles and open questions. \n\n  \n\n \nSanjeev Arora is the Charles C. Fitzmorris Professor of Computer Science and Director of Princeton Language and Intelligence\, a unit devoted to research and applications of large AI models. He received his PhD from UC Berkeley in 1994 and has been a faculty member at Princeton since then. He has been awarded the ACM Prize in Computing (2011)\, the Fulkerson Prize in Discrete Mathematics (2012)\, a Packard Fellowship\, a Sloan Fellowship\, and the ACM Doctoral Dissertation Award. He was a plenary speaker at the International Congress of Mathematicians in 2018 and is a member of the National Academy of Sciences and the American Academy of Arts and Sciences. \n\n\n\n\n\nThis event is made possible by the generous funding of Ding Lei and Harry Shum.\n\n\n 
URL:https://cmsa.fas.harvard.edu/event/2026_dingshum/
LOCATION:Harvard Science Center\, 1 Oxford Street\, Cambridge\, MA\, 02138
CATEGORIES:Ding Shum Lecture,Event,Public Lecture,Special Lectures
ATTACH;FMTTYPE=image/jpeg:https://cmsa.fas.harvard.edu/media/Ding-Shum-2026_hall-d.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260310T084500
DTEND;TZID=America/New_York:20260310T101500
DTSTAMP:20260507T001715
CREATED:20260127T153158Z
LAST-MODIFIED:20260316T161125Z
UID:10003881-1773132300-1773137700@cmsa.fas.harvard.edu
SUMMARY:CMSA/Tsinghua Math-Science Literature Lecture: Martin Bridson: Profinite rigidity: Chasing finite shadows of infinite groups
DESCRIPTION:CMSA/Tsinghua Math-Science Literature Lecture \n \nDate: March 10\, 2026 \nTime: 8:45 – 10:15 am ET \nLocation: Harvard Science Center Hall A\, 1 Oxford Street\, Cambridge MA & via Zoom Webinar \nSpeaker: Martin Bridson FRS\, Whitehead Professor of Pure Mathematics at Oxford and President of the Clay Mathematics Institute \nTitle: Profinite rigidity: Chasing finite shadows of infinite groups \nAbstract: There are many situations in geometry or elsewhere in mathematics where it is natural or convenient to explore infinite groups of symmetries via their actions on finite objects. But how hard is it to find these finite manifestations\, and to what extent does the collection of all such actions determine the infinite group?\nIn this talk\, I will sketch some of the rich history of such problems and then describe some of the significant advances in recent years. \nWe’ll pay particular attention to groups that arise in 3-dimensional geometry and topology. \n  \n\nIn Spring 2020\, the CMSA began hosting a lecture series on literature in the mathematical sciences\, with a focus on significant developments in mathematics that have influenced the discipline\, and the lifetime accomplishments of significant scholars. \n  \n 
URL:https://cmsa.fas.harvard.edu/event/mathscilit2026_mb/
LOCATION:MA
CATEGORIES:Math Science Literature Lecture Series,Public Lecture,Special Lectures
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/Mathlit_Bridson-poster.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260311T170000
DTEND;TZID=America/New_York:20260311T180000
DTSTAMP:20260507T001715
CREATED:20250409T160708Z
LAST-MODIFIED:20260316T161233Z
UID:10003724-1773248400-1773252000@cmsa.fas.harvard.edu
SUMMARY:Millennium Prize Problems Lecture - Javier Gómez-Serrano: Navier-Stokes Existence or Breakdown
DESCRIPTION:Date: March 11\, 2026 \nTime: 5:00–6:00 pm \nLocation: Harvard Science Center Hall C\, 1 Oxford St.\, Cambridge MA & via Zoom Webinar \nSpeaker: Javier Gómez-Serrano\, Brown University \nTitle: Navier-Stokes Existence or Breakdown \nAbstract: The Navier-Stokes equations have been the cornerstone of fluid dynamics for over a century\, accurately describing the motion of viscous fluids such as water and air. However\, despite their fundamental importance to mathematics and physics\, a profound question remains unanswered: do solutions to these equations always exist for all time\, or can they break down and develop singularities (points where the equations lose their validity)? In this Millennium Prize Problems Lecture\, I will explore the current mathematical landscape surrounding the Navier-Stokes and related equations. The talk will discuss the historical context\, the ongoing search for global regularity versus finite-time blowup\, and the latest analytical and computational breakthroughs pushing the boundaries of what we know about fluid behavior. \nRead more about the Navier-Stokes Equation at the Clay Math website. \n  \nOrganizers: Martin Bridson\, Clay Mathematics Institute | Dan Freed\, Harvard University and CMSA | Mike Hopkins\, Harvard University \n\n                   \n\nMillennium Prize Problems Lecture Series
URL:https://cmsa.fas.harvard.edu/event/clay_31126/
LOCATION:Harvard Science Center\, 1 Oxford Street\, Cambridge\, MA\, 02138
CATEGORIES:Millennium Prize Problems Lecture,Special Lectures
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/Gomez-Serrano_web-ad3_crop.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260415T080000
DTEND;TZID=America/New_York:20260416T170000
DTSTAMP:20260507T001715
CREATED:20250502T183823Z
LAST-MODIFIED:20260423T163805Z
UID:10003751-1776240000-1776358800@cmsa.fas.harvard.edu
SUMMARY:Swampland and our Universe
DESCRIPTION:Swampland and our Universe \nDates: April 15–16\, 2026 \nLocation: Harvard CMSA\, Room G10\, 20 Garden Street\, Cambridge MA \nThe swampland program has inspired a range of new ideas in both cosmology and neutrino physics. This workshop brings together experts in neutrino physics\, dark energy\, dark matter\, early-universe cosmology\, and string theory to share insights on these developments and to discuss current and future experimental tests. \nSpeakers \n\nIgnatios Antoniadis\, IAS\, Princeton\nAlek Bedroya\, Princeton\nMike Boylan-Kolchin\, UT Austin\nM.C. Gonzalez-Garcia\, YITP Stony Brook & ICREA U. Barcelona\nMustapha Ishak-Boushaki\, UT Dallas\nMarc Kamionkowski\, Johns Hopkins\nMiguel Montero\, Institute of Theoretical Physics\, Madrid\nGeorges Obied\, U Chicago\nMatt Reece\, Harvard\nTracy Slatyer\, MIT\n\nOrganizers: Luis Anchordoqui (CUNY Lehman College)\, Sonia Paban (Harvard Physics)\, and Cumrun Vafa (Harvard Physics) \n  \n \n  \n  \nVideos are available on the CMSA Youtube Swampland Playlist \nSchedule\n(download pdf) \nWednesday\, Apr. 15\, 2026 \n8:00–9:00 am\nBreakfast \n9:00–10:00 am\nMarc Kamionkowski\, Johns Hopkins: Dark-matter dynamics and new physics \nAbstract: Galactic halos that are spherical\, stationary\, and composed of collisionless dark matter are easy to describe mathematically. If dark matter decays or interacts\, or there is some departure from equilibrium or time evolution of the system\, all bets are off. In this case costly N-body simulations are required. If\, however\, one retains the assumption of spherical symmetry\, these systems can be evolved numerically with a far simpler algorithm that is easily coded and runs in a matter of minutes on a laptop\, rather than a day on a supercomputer. 
I will describe this approach and illustrate with simulations of self-interacting dark matter\, decaying dark matter (with and without anisotropic velocity distributions)\, supermassive-black-hole growth\, tidal stripping\, and mixed SIDM/CDM models. Come prepared with your own non-standard dark-matter model; we’ll see if we can simulate it during the talk! \n10:00–10:30 am\nCoffee Break \n10:30–11:30 am\nTracy Slatyer\, MIT: What (more) the CMB can teach us about dark matter \nAbstract: Observations of the cosmic microwave background have already provided critical evidence for dark matter\, but can also stringently constrain a range of dark matter properties. I will outline CMB constraints on dark matter properties based on purely gravitational effects\, and then discuss in more detail how both CMB anisotropies and the blackbody spectrum can be used to test dark matter interactions with the Standard Model. \n11:30 am–1:00 pm\nLunch Break (catered) \n1:00–2:00 pm\nAlek Bedroya\, Princeton: How Quantum Gravity Constrains Physics on the Largest Length Scales \nAbstract: I will review the hierarchy of energy scales in quantum gravity\, from the Hubble scale in the IR to the quantum-gravity cutoff and the Planck scale in the UV\, and emphasize the nontrivial UV/IR relations that connect them. I will discuss conjectures constraining scalar potentials and explain how they are related to the behavior of the other energy scales\, together with bottom-up arguments based on general principles of quantum gravity such as holography. In particular\, I will explain how well-motivated holographic assumptions lead to strong restrictions on scalar potentials\, ruling out parametrically prolonged accelerated expansion for positive potentials and obstructing parametric scale separation for negative potentials associated with AdS vacua. 
\n2:00–2:30 pm\nCoffee Break \n2:30–3:30 pm\nMustapha Ishak-Boushaki\, UT Dallas: Persistent and serious challenge to the ΛCDM throne: Evidence for dynamical dark energy rising from combinations of different types of datasets \nAbstract: We derive multiple constraints on dark energy and compare dynamical dark energy models with a time-varying equation of state (w0waCDM) versus a cosmological constant model (LCDM). We use Baryon Acoustic Oscillation (BAO) from DESI and DES\, Cosmic Microwave Background from Planck with and without lensing from Planck and ACT (noted CMBL and CMB\, respectively)\, supernovae (SN)\, and cross-correlations between galaxy positions and galaxy lensing from DES. We use pairs or triplets of datasets where we exclude one type of dataset each time and categorize them as “NO SN”\, “NO CMB” and “NO BAO” combinations. In all cases\, we find that the combinations favor the w0waCDM model over LCDM\, with significance ranging from 2.0 to 3.0-sigma. 
The persistence of this pattern across various dataset combinations even when any of the datasets is excluded supports an overall validation of this trending result regardless of any specific dataset. Next\, we use larger combinations of these datasets after verifying their mutual consistency within the w0waCDM model. We find combinations that give robust significance levels\, with DESI+DESY6BAO+CMBL+SN giving 3.4-sigma. In sum\, while we need to remain cautious\, the trend and pattern of these results beyond any single type of dataset and their associated systematics presents a compelling overall portrait not in favor of the LCDM and constitutes a serious challenge to the model’s reign. A few other cosmological results will be provided. \n3:30–4:00 pm\nCoffee Break \n4:00–5:00 pm\nGeorges Obied\, U Chicago: The Dark Dimension and its interplay with DESI data \nAbstract: In this talk\, I will discuss the motivation for considering an extra mesoscopic Dark Dimension of length l ~ 1 – 10 microns\, taking into account theoretical and observational arguments. I will then talk about cosmological aspects of the Dark Dimension. In particular this scenario leads\, by the universal coupling of the Standard Model sector to bulk gravitons\, to massive spin 2 KK excitations of the graviton in the Dark Dimension (the “dark gravitons”) as an unavoidable dark matter candidate. Observations allow such an extra dimension of size in the micron range. Finally\, I will discuss how this scenario can naturally accommodate features recently observed by the DESI survey such as an effective dark energy equation of state that is smaller than -1. \n   \nThursday\, Apr. 16\, 2026 \n8:00–8:30 am\nBreakfast \n8:30–9:30 am\nMC Gonzalez-Garcia\, YITP Stony Brook & ICREA U. 
Barcelona: Massive Neutrinos in 2026: What we know\, what we do not know (yet?)\, and what we do not understand \nAbstract: In this talk I will present an update of the current understanding (and some gaps in understanding) of the neutrino masses and the lepton mixing and some other minimal SM extensions as derived from direct scrutiny of the results of neutrino flavour oscillation experiments\, some other laboratory probes\, and the cosmos. \n9:30–10:00 am\nCoffee Break \n10:00–11:00 am\nMiguel Montero\, IFT\, Madrid: Neutrinos and B-L symmetry in the Dark Dimension scenario \nAbstract: The Dark Dimension proposes the existence of a micrometer-sized large extra dimension\, whose size is tied to the observed small vacuum energy. I will review the scenario\, and then discuss how to embed the B-L global symmetry of the SM\, focusing on one possibility which leads to an explanation of the observed coincidence between the neutrino mass scale and the vacuum energy scale\, while leading to 3 light species of right-handed neutrinos. I will also briefly discuss potential opportunities for detection of the resulting neutrino oscillations. \n11:00–11:30 am\nCoffee Break \n11:30 am–12:30 pm\nIgnatios Antoniadis\, IAS\, Princeton: Searching for the dark dimension in neutrino experiments \nAbstract: Micron-size extra dimensions offer a possibility to explain the smallness of neutrino masses if the right-handed neutrino propagates in the higher-dimensional bulk. I will discuss the theoretical framework and the experimental signatures of this proposal in present and future experiments of the KATRIN prototype\, aiming to measure the magnitude of neutrino masses and to search for extra sterile-type species. 
\n12:30–1:30 pm\nLunch Break (catered) \n1:30–2:30 pm\nMike Boylan-Kolchin\, UT Austin: Galaxies as Tracers of the Matter Density Field \nAbstract: Galaxy formation is often (rightly) thought of as involving a complex interplay of messy astrophysical processes\, but it also traces the nonlinear evolution of the matter density in the Universe. Remarkably\, it appears that properties of this nonlinear field are intimately connected to properties of the initial linear fluctuations and some basic physics of dark matter interactions. I will explore some of these connections\, with applications that include the surprisingly fast evolution of early galaxy formation as revealed by JWST and properties of the lowest-mass dark matter clumps capable of hosting galaxies in the local Universe.\n2:30–3:00 pm\nCoffee Break \n3:00–4:00 pm\nMatt Reece\, Harvard: Axions from String Theory\, and String Theory from Axions \nAbstract: String theory compactifications contain the right ingredients to produce axion fields that might solve the Strong CP problem or contribute to dark matter or dynamical dark energy in our universe. After briefly reviewing some of these ingredients\, I will frame the inverse question: suppose that an axion is discovered\, and its decay constant is measured in an experiment. Could this help us to locate ourselves in the string landscape? In particular\, I will discuss how an axion could give us clues about the fundamental string scale and the scale of supersymmetry breaking. \n  \n  \n  \n  \n 
URL:https://cmsa.fas.harvard.edu/event/swampland2026/
LOCATION:CMSA 20 Garden Street Cambridge\, Massachusetts 02138 United States
CATEGORIES:Workshop
ATTACH;FMTTYPE=image/jpeg:https://cmsa.fas.harvard.edu/media/swampland_2026.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260415T170000
DTEND;TZID=America/New_York:20260415T180000
DTSTAMP:20260507T001715
CREATED:20250409T160808Z
LAST-MODIFIED:20260423T155210Z
UID:10003725-1776272400-1776276000@cmsa.fas.harvard.edu
SUMMARY:Millennium Prize Problems Lecture - Peter Sarnak: Riemann Hypothesis
DESCRIPTION:  \n \nDate: April 15\, 2026 \nTime: 5:00–6:00 pm \nLocation: Harvard Science Center Hall C\, 1 Oxford St.\, Cambridge MA \nSpeaker: Peter Sarnak\, Institute for Advanced Study \nTitle: The Riemann Hypothesis \nAbstract: After reviewing the hypothesis as put forth by Riemann\, we discuss its generalizations and analogues. We highlight a few of their implications and workarounds\, and approaches to probing their truth. \nRead more about the Riemann Hypothesis at the Clay Math website. \nOrganizers: Martin Bridson\, Clay Mathematics Institute | Dan Freed\, Harvard University and CMSA | Mike Hopkins\, Harvard University \n  \n\nMillennium Prize Problems Lecture Series
URL:https://cmsa.fas.harvard.edu/event/clay_41526/
LOCATION:Harvard Science Center\, 1 Oxford Street\, Cambridge\, MA\, 02138
CATEGORIES:Millennium Prize Problems Lecture,Special Lectures
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/Sarnak_web-ad.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260422T090000
DTEND;TZID=America/New_York:20260422T103000
DTSTAMP:20260507T001715
CREATED:20260130T191058Z
LAST-MODIFIED:20260430T205709Z
UID:10003887-1776848400-1776853800@cmsa.fas.harvard.edu
SUMMARY:CMSA/Tsinghua Math-Science Literature Lecture: Nicolai Reshetikhin (Tsinghua): Asymptotic representation theory
DESCRIPTION:CMSA/Tsinghua Math-Science Literature Lecture \nDate: April 22\, 2026 \nTime: 9:00 – 10:30 am ET \nLocation: via Zoom Webinar \nSpeaker: Nicolai Reshetikhin\, Yau Mathematical Sciences Center\, Tsinghua University \nTitle: Asymptotic representation theory \nAbstract: Loosely speaking\, asymptotic representation theory studies representations of “large” groups or algebras. One of the first results in this direction is the study of Plancherel measures on the symmetric group $S_N$ in the limit $N\to \infty$ by Vershik and Kerov and by Logan and Shepp. The first part of the talk will be an overview of results on statistics of irreducible representations in large tensor products. Then we focus on more modern results on statistics of tilting and projective modules in large tensor products and on how some problems in asymptotic representation theory are related to dimer models in statistical mechanics. \n\nIn Spring 2020\, the CMSA began hosting a lecture series on literature in the mathematical sciences\, with a focus on significant developments in mathematics that have influenced the discipline\, and the lifetime accomplishments of significant scholars. \n  \n 
URL:https://cmsa.fas.harvard.edu/event/mathscilit2026_nr/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Math Science Literature Lecture Series,Public Lecture,Special Lectures
ATTACH;FMTTYPE=image/jpeg:https://cmsa.fas.harvard.edu/media/Mathlit_Reshetikhin.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260423T160000
DTEND;TZID=America/New_York:20260423T170000
DTSTAMP:20260507T001715
CREATED:20251006T173927Z
LAST-MODIFIED:20260423T155101Z
UID:10003806-1776960000-1776963600@cmsa.fas.harvard.edu
SUMMARY:Sixth Annual Yip Lecture | Regina Barzilay\, MIT: Can machine learning methods design drugs?
DESCRIPTION:Sixth Annual Yip Lecture \nDate: April 23\, 2026 \nTime: 4:00–5:00 pm ET \nLocation: Harvard Science Center Hall A & via Zoom Webinar \nSpeaker: Regina Barzilay\, MIT \nTitle: Can ML methods design drugs? \nAbstract: Today\, life sciences are driven by prohibitively expensive wet-lab experimentation\, which limits the pace and scope of discovery. This talk focuses on AI algorithms that enable in-silico modeling of biological processes. Specifically\, I will focus on algorithms for molecular and cellular modeling. I will highlight several successful examples where these algorithms have already transformed drug discovery. In the second part of the talk\, I want to focus on problems where current methods fail to deliver as expected\, motivating the need for algorithmic innovations. \nIn-person registration \nWebinar registration \n  \nRegina Barzilay is a School of Engineering Distinguished Professor for AI and Health in the Department of Electrical Engineering and Computer Science (EECS) at MIT. Since 2018\, she has been the AI faculty lead for the MIT Jameel Clinic and a member of MIT CSAIL. \nShe is a member of three national academies: the National Academy of Engineering\, the National Academy of Medicine\, and the American Academy of Arts & Sciences. \nShe is also a recipient of various awards\, including a 2017 MacArthur Fellowship “Genius Grant.” In 2020\, she was awarded the Squirrel AI Award for Artificial Intelligence for the Benefit of Humanity. More recently\, she has been recognized in the 2025 TIME100 AI List and awarded the IEEE Frances E. Allen Medal for her development of innovative machine learning algorithms that have significantly advanced human language technology and transformed medical diagnostics and drug discovery. \nShe completed her PhD in Computer Science at Columbia University and spent a year as a postdoc at Cornell University. 
Barzilay received her undergraduate degree from Ben-Gurion University of the Negev\, Israel. \nThe Yip Lecture takes place thanks to the support of Dr. Shing-Yiu Yip. \n  \n 
URL:https://cmsa.fas.harvard.edu/event/yip-2026/
LOCATION:MA
CATEGORIES:Public Lecture,Special Lectures,Yip Lecture Series
ATTACH;FMTTYPE=image/jpeg:https://cmsa.fas.harvard.edu/media/yip_2026_final.2.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260518T090000
DTEND;TZID=America/New_York:20260522T170000
DTSTAMP:20260507T001715
CREATED:20250623T220157Z
LAST-MODIFIED:20260428T153035Z
UID:10003754-1779094800-1779469200@cmsa.fas.harvard.edu
SUMMARY:Workshop on Calabi-Yau metrics and optimal transport
DESCRIPTION:Workshop on Calabi-Yau metrics and optimal transport \nDates: May 18–22\, 2026 \nLocation: Harvard CMSA\, 20 Garden Street\, Cambridge MA \nRecent advances in the study of Calabi-Yau metrics have revealed an interesting connection with optimal transport\, and the regularity theory for optimal transport is expected to play an increasingly important role in the study of Kähler geometry. The goal of this workshop is to bring together the optimal transport and complex geometry communities to investigate problems arising from these exciting developments. \nLimited support may be available for approved postdocs and early career applicants. The application form can be found at: https://forms.gle/1zxTEKhZyz4TPfSY6 \n  \nRegister to attend in-person \nRegister for Zoom Webinar \n  \nMinicourse Speakers \n\nRobert McCann\, University of Toronto\nYang Li\, Cambridge University\n\nWorkshop Speakers \n\nRolf Andreasson\, Chalmers University\, Sweden\nBenjy Firester\, MIT\nJakob Hultgren\, Umea University\, Sweden\nYoung-Heon Kim\, University of British Columbia\nNam Le\, Indiana University\nJiakun Liu\, University of Sydney\nDuong H. 
Phong\, Columbia University\nArghya Rakshit\, University of Toronto\nGabor Szekelyhidi\, Northwestern University\nYueqiao Wu\, Johns Hopkins University\n\nOrganizers: \n\nTristan Collins\, University of Toronto\nMattias Jonsson\, University of Michigan\nConnor Mooney\, University of California\, Irvine\nFreid Tong\, University of Toronto\n\n  \nSchedule (subject to change) \nMonday\, May 18\, 2026 \n9:00–9:30 am\nBreakfast \n9:30–10:45 am\nTutorial: Yang Li\, Cambridge University (via Zoom Webinar) \n10:45–11:15 am\nBreak \n11:15 am–12:30 pm\nTutorial: Robert McCann\, University of Toronto\nTitle: A geometric approach to a priori estimates for optimal transport maps\nAbstract: A key inequality which underpins the regularity theory of optimal transport for costs satisfying the Ma-Trudinger-Wang condition is the Pogorelov second derivative bound. This translates to an a priori interior modulus of the differential estimate for smooth optimal maps. We describe a new derivation of this estimate with Brendle\, Leger\, and Rankin which relies in part on Kim\, McCann\, and Warren’s observation that the graph of an optimal map becomes a volume-maximizing non-timelike submanifold when the product of the source and target domains is endowed with a suitable pseudo-Riemannian geometry that combines both the marginal densities and the cost. This unexpectedly links optimal transport to the Plateau problem in geometry with split signature\, and shows that the key difficulty is proving that the maximizing non-timelike submanifold is in fact (uniformly) spacelike. J. Reine Angew. Math. 
817 (2024) 251-266 doi.org/10.1515/crelle-2024-0071 arXiv 2311.10208 \n12:30–2:00 pm\nLunch (catered) \n2:00–3:15 pm\nTalk: Nam Le\, Indiana University\nTitle: Variational approach to degenerate Monge-Ampère equations with mixed measures and monotonicity\nAbstract: In this talk\, we will discuss the solvability and uniqueness for several degenerate Monge-Ampère equations including the Monge-Ampère eigenvalue problem in real Euclidean spaces that involve singular Borel measures. Our approach systematically analyzes the Monge-Ampère energy from the variational point of view and appropriately exploits monotonicity arguments. We will examine several essential tools: the mixed Monge-Ampère measure\, Aleksandrov-Blocki-Jerison type maximum principles\, convex envelopes\, comparison principles for subcritical equations\, and integration by parts whose failure leads to symmetry breaking and nonuniqueness phenomena. \n3:15–3:45 pm\nBreak \n3:45–5:00 pm\nTalk: Yueqiao Wu\, Johns Hopkins University \n  \nTuesday\, May 19\, 2026 \n9:00–9:30 am\nBreakfast \n9:30–10:45 am\nTutorial: Robert McCann\, University of Toronto\nTitle: Trading linearity for ellipticity: A low regularity Lorentzian splitting theorem\nAbstract: While Einstein’s theory of gravity is formulated in a smooth setting\, the celebrated singularity theorems of Hawking and Penrose describe many physical situations in which this smoothness must eventually break down. It is thus of great interest to study the theory in low regularity settings. In the lecture\, we establish a low regularity splitting theorem by sacrificing linearity of the d’Alembertian to recover ellipticity. We exploit a negative homogeneity $p$-d’Alembert operator for this purpose. 
The same technique yields a simplified proof of Eschenburg’s (1988)\, Galloway’s (1989)\, and Newman’s (1990) confirmations of Yau’s (1982) conjecture\, bringing all three Lorentzian splitting results into a framework closer to the Cheeger-Gromoll splitting theorem from Riemannian geometry. Based on joint work with Mathias Braun\, Nicola Gigli\, Argam Ohanyan\, and Clemens Saemann: 1) arXiv 2501.00702 2) arXiv 2408.15968 3) arXiv 2410.12632 4) arXiv 2507.06836 \n10:45–11:15 am\nBreak \n11:15 am–12:30 pm\nTutorial: Yang Li\, Cambridge University (via Zoom Webinar) \n12:30–2:00 pm\nLunch Break \n2:00–3:15 pm\nTalk: Young-Heon Kim\, University of British Columbia \n3:15–3:45 pm\nBreak \n3:45–5:00 pm\nTalk: Duong Phong\, Columbia University \n6:30 pm\nDinner \n  \nWednesday\, May 20\, 2026 \n9:00–9:30 am\nBreakfast \n9:30–10:45 am\nTutorial: Yang Li\, Cambridge University (via Zoom Webinar) \n10:45–11:15 am\nBreak \n11:15 am–12:30 pm\nTutorial: Robert McCann\, University of Toronto\nTitle: The monopolist’s free boundary problem in the plane: an excursion into the economic value of private information\nAbstract: The principal-agent problem is an important paradigm in economic theory for studying the value of private information: the nonlinear pricing problem faced by a monopolist is one example; others include optimal taxation and auction design. For multidimensional spaces of consumers (i.e. agents) and products\, Rochet and Chone (1998) reformulated this problem as a concave maximization over the set of convex functions\, by assuming agent preferences are bilinear in the product and agent parameters. This optimization corresponds mathematically to a convexity-constrained obstacle problem. 
The solution is divided into multiple regions\, according to the rank of the Hessian of the optimizer.\nIf the monopolist’s costs grow quadratically with the product type\, we show that a partially smooth free boundary delineates the region where it becomes efficient to customize products for individual buyers. We give the first complete solution of the problem on square domains\, and discover new transitions from unbunched to targeted and from targeted to blunt bunching as market conditions become more and more favorable to the seller.\nBased on works with Kelvin Shuangjian Zhang\, Cale Rankin\, and Lucas O’Brien in various combinations:\n1) Math. Models Methods Appl. Sci. 34 (2024) 2351-2394; 2) J. Convex Anal. (Rockafellar 90 Issue)\, 32 (2) (2025) 579-584; 3) arXiv 2303.04937; 4) arXiv 2412.15505; 5) arXiv 2603.14100. \n  \nThursday\, May 21\, 2026 \n9:00–9:30 am\nBreakfast \n9:30–10:45 am\nTalk: Gabor Szekelyhidi\, Northwestern University \n10:45–11:15 am\nBreak \n11:15 am–12:30 pm\nTalk: Rolf Andreasson\, Chalmers University\, Sweden\nTitle: Optimal transport between boundaries of dual reflexive polytopes\nAbstract: I will present an optimal transport problem between the boundaries of a pair of reflexive polytopes. Under a certain structural condition on its solution\, this problem is related to the study of metric degenerations of families of Calabi–Yau hypersurfaces in the corresponding toric Fano variety. A better understanding of such solutions and their regularity would shed light on several aspects of the degeneration and conjectural Gromov–Hausdorff limit\, and I will present some open directions of research. This is based on joint work with Jakob Hultgren\, Mattias Jonsson\, Enrica Mazzon and Nicholas McCleerey. 
\n12:30–2:00 pm\nLunch Break \n2:00–3:15 pm\nTalk: Jakob Hultgren\, Umea University\, Sweden \n3:15–3:45 pm\nBreak \n3:45–5:00 pm\nTalk: Benjy Firester\, MIT \n  \nFriday\, May 22\, 2026 \n9:00–9:30 am\nBreakfast \n9:30–10:45 am\nTalk: Jiakun Liu\, University of Sydney\nTitle: Free boundary problems in optimal transportation\nAbstract: In this talk\, I will present some recent results on the regularity of free boundaries in optimal transportation\, including higher-order regularity\, global regularity\, and a model case involving multiple targets. These results are based on a series of joint works with Shibing Chen\, Xianduo Wang\, and Xu-Jia Wang. \n10:45–11:15 am\nBreak \n11:15 am–12:30 pm\nTalk: Arghya Rakshit\, University of Toronto\nTitle: Solutions to the Monge–Ampère equation with singular structures\nAbstract: We construct examples of solutions to the Monge–Ampère equation with point masses exhibiting polyhedral singular structures. We further analyze the stability of these singular sets under small perturbations of the data. In addition\, we construct solutions whose Monge–Ampère measure contains a singular component supported on lower-dimensional sets and we study the regularity of such solutions. \n 
URL:https://cmsa.fas.harvard.edu/event/cymetrics/
LOCATION:CMSA 20 Garden Street Cambridge\, Massachusetts 02138 United States
CATEGORIES:Workshop
ATTACH;FMTTYPE=image/jpeg:https://cmsa.fas.harvard.edu/media/CY-Workshop_2.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260903T090000
DTEND;TZID=America/New_York:20260904T170000
DTSTAMP:20260507T001715
CREATED:20260217T174509Z
LAST-MODIFIED:20260217T174509Z
UID:10003846-1788426000-1788541200@cmsa.fas.harvard.edu
SUMMARY:Big Data Conference 2026
DESCRIPTION:Big Data Conference 2026 \nDates: Sep. 3–4\, 2026 \nLocation: Harvard University CMSA\, 20 Garden Street\, Cambridge & via Zoom \nThe Big Data Conference features speakers from the Harvard community as well as scholars from across the globe\, with talks focusing on computer science\, statistics\, math and physics\, and economics. \nDetails TBA \n 
URL:https://cmsa.fas.harvard.edu/event/bigdata_2026/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Big Data Conference,Conference,Event
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260908T090000
DTEND;TZID=America/New_York:20260911T170000
DTSTAMP:20260507T001715
CREATED:20260217T174544Z
LAST-MODIFIED:20260217T174544Z
UID:10003847-1788858000-1789146000@cmsa.fas.harvard.edu
SUMMARY:The Geometry of Machine Learning 2026
DESCRIPTION:The Geometry of Machine Learning 2026 \nDates: September 8–11\, 2026 \nLocation: Harvard CMSA\, Room G10\, 20 Garden Street\, Cambridge MA 02138 \nOrganizers: Michael R. Douglas (CMSA) and Mike Freedman (CMSA) \n  \nDetails TBA \n  \nSupport provided by Logical Intelligence. \n \n  \n 
URL:https://cmsa.fas.harvard.edu/event/gml_2026/
LOCATION:CMSA 20 Garden Street Cambridge\, Massachusetts 02138 United States
CATEGORIES:Conference,Event
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260916T090000
DTEND;TZID=America/New_York:20260916T103000
DTSTAMP:20260507T001715
CREATED:20260422T170335Z
LAST-MODIFIED:20260422T170806Z
UID:10003938-1789549200-1789554600@cmsa.fas.harvard.edu
SUMMARY:CMSA/Tsinghua Math-Science Literature Lecture: Robert Gompf
DESCRIPTION:CMSA/Tsinghua Math-Science Literature Lecture \nDate: September 16\, 2026 \nTime: 9:00 – 10:30 am ET \nLocation: CMSA G10\, 20 Garden Street & via Zoom Webinar \nSpeaker: Robert E. Gompf\, University of Texas\, Austin \n  \nIn Spring 2020\, the CMSA began hosting a lecture series on literature in the mathematical sciences\, focusing on significant developments in mathematics that have influenced the discipline and on the lifetime accomplishments of distinguished scholars. \n  \n 
URL:https://cmsa.fas.harvard.edu/event/mathscilit2026_rg/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Math Science Literature Lecture Series,Public Lecture,Special Lectures
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260928T080000
DTEND;TZID=America/New_York:20261002T170000
DTSTAMP:20260507T001715
CREATED:20251027T191925Z
LAST-MODIFIED:20251027T192243Z
UID:10003827-1790582400-1790960400@cmsa.fas.harvard.edu
SUMMARY:Workshop on Lagrangian Floer theory and applications
DESCRIPTION:Workshop on Lagrangian Floer theory and applications \nDates: September 28 – October 2\, 2026 \nLocation: CMSA G10\, 20 Garden St.\, Cambridge MA 02138 \nThis workshop is part of the Lagrangian Floer theory and applications Program \n  \nOrganizers: Denis Auroux (Harvard)\, Jonny Evans (Lancaster)\, and Chris Woodward (Rutgers) \n  \nDetails TBA
URL:https://cmsa.fas.harvard.edu/event/lftworkshop/
LOCATION:CMSA 20 Garden Street Cambridge\, Massachusetts 02138 United States
CATEGORIES:Workshop
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20261106T090000
DTEND;TZID=America/New_York:20261107T170000
DTSTAMP:20260507T001715
CREATED:20260423T160818Z
LAST-MODIFIED:20260429T180437Z
UID:10003936-1793955600-1794070800@cmsa.fas.harvard.edu
SUMMARY:Northeast Conference on Categorical Methods
DESCRIPTION:Northeast Conference on Categorical Methods \n\nThis conference is intended to bring together researchers in all areas of mathematics and mathematical physics whose work involves the use of methods from categorical algebra\, abstract homotopy theory\, and higher category theory. Our goal is to showcase new\, exciting research and offer an avenue for researchers from the Northeast US to discuss their work and collaborate with people in this ever-growing community. \nOrganized by Dan Freed\, Harvard Math & CMSA; Owen Gwilliam\, UMass Amherst; and Lorenzo Riva\, Harvard CMSA
URL:https://cmsa.fas.harvard.edu/event/cm/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Workshop
END:VEVENT
END:VCALENDAR