BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//CMSA - ECPv6.15.18//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:CMSA
X-ORIGINAL-URL:https://cmsa.fas.harvard.edu
X-WR-CALDESC:Events for CMSA
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20240310T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20241103T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20250309T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20251102T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20260308T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20261101T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250902T161500
DTEND;TZID=America/New_York:20250902T183000
DTSTAMP:20260411T144951Z
CREATED:20250829T204330Z
LAST-MODIFIED:20250902T170240Z
UID:10003773-1756829700-1756837800@cmsa.fas.harvard.edu
SUMMARY:Fukaya category and gauge theory
DESCRIPTION:Geometry and Quantum Theory Seminar \nSpeaker: Saman Habibi Esfahani\, Harvard CMSA \nTitle: Fukaya category and gauge theory \nAbstract: After setting up some background\, I will discuss the Fukaya $A_\infty$-category and several instances where it appears in gauge theory\, such as in the study of flat connections on Riemann surfaces\, holomorphic sections of some hyperkähler bundles\, and instantons and holomorphic curves in K3 surfaces. If time permits\, I will also outline potential applications of these ideas to the study of 3-manifolds and manifolds with special holonomy. \n  \n 
URL:https://cmsa.fas.harvard.edu/event/quantumgeo_9225/
LOCATION:Science Center 507\, 1 Oxford Street\, Cambridge\, 02138
CATEGORIES:Geometry and Quantum Theory Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-Geometry-Quantum-Theory-9.2.25.edit_-scaled.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250903T160000
DTEND;TZID=America/New_York:20250903T173000
DTSTAMP:20260411T144951Z
CREATED:20250729T195223Z
LAST-MODIFIED:20250805T182154Z
UID:10003758-1756915200-1756920600@cmsa.fas.harvard.edu
SUMMARY:Fall CMSA Welcome Event
DESCRIPTION:Fall CMSA Welcome Event \nDate: September 3\, 2025 \nTime: 4:00 pm \nLocation: CMSA Common Room\, 20 Garden Street\, Cambridge MA \n  \nAll CMSA and Math affiliates are invited. \n 
URL:https://cmsa.fas.harvard.edu/event/welcome925/
LOCATION:CMSA 20 Garden Street Cambridge\, Massachusetts 02138 United States
CATEGORIES:Event
ATTACH;FMTTYPE=image/jpeg:https://cmsa.fas.harvard.edu/media/CMSA_Wwlecome-2023-IMG_9367.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250908T090000
DTEND;TZID=America/New_York:20250910T170000
DTSTAMP:20260411T144951Z
CREATED:20250502T174228Z
LAST-MODIFIED:20250909T151806Z
UID:10003660-1757322000-1757523600@cmsa.fas.harvard.edu
SUMMARY:Math and Machine Learning Reunion Workshop
DESCRIPTION:Math and Machine Learning Reunion Workshop \nDates: September 8–10\, 2025 \nLocation: Harvard CMSA\, Room G10\, 20 Garden Street\, Cambridge MA \nMachine learning and AI are increasingly important tools in all fields of research. In the fall of 2024\, the CMSA Mathematics and Machine Learning Program hosted 70 mathematicians and machine learning experts\, ranging from beginners to established leaders in their field\, to explore ML as a research tool for mathematicians\, and mathematical approaches to understanding ML. More than 20 papers came out of projects started and developed during the program. The MML Reunion workshop will be an opportunity for the participants to share their results\, review subsequent developments\, and develop directions for future research. \nRegistration required \nIn-person registration \nZoom meeting registration \n  \nInvited Speakers \n\nAngelica Babei\, Howard University\nGergely Bérczi\, Aarhus University\nJoanna Bieri\, University of Redlands\nGiorgi Butbaia\, University of New Hampshire\nRandy Davila\, RelationalAI\, Rice University\nAlyson Deines\, IDA/CCR La Jolla\nSergei Gukov\, Caltech\nYang-Hui He\, University of Oxford\nMark Hughes\, Brigham Young University\nKyu-Hwan Lee\, University of Connecticut\nEric Mjolsness\, UC Irvine\nMaria Prat Colomer\, Brown University\nSébastien Racanière\, Google DeepMind\nEric Ramos\, Stevens Institute of Technology\nTamara Veenstra\, IDA-CCR La Jolla\n\nOrganizer: Michael Douglas\, CMSA \n\nSchedule \nMonday Sep. 8\, 2025 \n\n\n\n9:00–9:30 am\nMorning refreshments\n\n\n9:30–9:45 am\nIntroductions\n\n\n9:45–10:45 am\nAngelica Babei\, Howard University\nTitle: Predicting Euler factors of elliptic curves\nAbstract: Two non-isogenous elliptic curves will have distinct traces of Frobenius at a large enough prime\, and a finite set of $a_p(E)$ values determines all others. 
However\, even when enough $a_p(E)$ values are provided to uniquely identify the isogeny class\, no efficient algorithm is known for determining the remaining $a_p(E)$ values from this finite set. Preliminary results show that ML models can learn to predict the next trace of Frobenius with a surprising degree of accuracy from relatively few nearby entries. We investigate some possible reasons for this performance. Based on joint work with François Charton\, Edgar Costa\, Xiaoyu Huang\, Kyu-Hwan Lee\, David Lowry-Duda\, Ashvni Narayanan\, and Alexey Pozdnyakov.\n\n\n10:45–11:00 am\nBreak\n\n\n11:00 am–12:00 pm\nKyu-Hwan Lee\, University of Connecticut\nTitle: Machine learning mutation-acyclicity of quivers\n\n\n12:00–1:30 pm\nLunch\n\n\n1:30–2:30 pm\nGergely Bérczi\, Aarhus University\nTitle: Diffusion Models for Sphere Packings\n\n\n2:30–2:45 pm\nBreak\n\n\n2:45–3:45 pm\nRandy Davila\, RelationalAI\, Rice University\nTitle: Recent Developments in Automated Conjecturing\nAbstract: The dream of a machine capable of generating deep mathematical insight has inspired decades of research—from Fajtlowicz’s Graffiti program in graph theory and chemistry to DeepMind’s neural breakthroughs in knot theory. In this talk\, we briefly trace the evolution of automated conjecturing systems and present recent advances that deepen our understanding of what it means for machines to conjecture—a pursuit long embodied by our system\, TxGraffiti. Building on this legacy\, we introduce a new framework that integrates optimization\, enumeration\, and convex geometric methods with creative heuristics and symbolic translation. This extended system produces not only conjectured inequalities\, but also necessary and sufficient condition statements\, which can then be automatically ranked by the IRIS (Inequality Ranking and Inference System) model and translated into Lean 4 for formal verification. 
The result is a flexible architecture capable of generating precise\, human-readable\, and logically rigorous conjectures with minimal manual intervention.\nWe showcase results across a range of mathematical areas\, including graph theory\, polyhedral theory\, number theory\, and—for the first time—conjectures in string theory\, derived from the dataset of complete intersection Calabi–Yau (CICY) threefolds. Together\, these developments suggest that with the right blend of structure\, strategy\, and aesthetic\, machines can generate conjectures that not only withstand scrutiny but invite it—offering a glimpse into a future where AI contributes meaningfully to the creative process of mathematics.\n\n\n3:45–4:00 pm\nBreak\n\n\n4:00–5:00 pm\nEric Ramos\, Stevens Institute of Technology\nTitle: An AI approach to a conjecture of Erdos\nAbstract: Given a graph G\, its independence sequence is the integral sequence a_1\,a_2\,…\,a_n\, where a_i is the number of independent sets of vertices of size i. In the 90’s Erdos and coauthors showed that this sequence need not be unimodal for general graphs\, but conjectured that it is always unimodal whenever G is a tree. This conjecture was then naturally generalized to claim that the independence sequence of trees should be log concave\, in the sense that a_i^2 is always above a_{i-1}a_{i+1}. This stronger version of the conjecture was shown to hold for all trees of at most 25 vertices. In 2023\, however\, using improved computational power and a considerably more efficient algorithm\, Kadrawi\, Levit\, Yosef\, and Mirzrachi proved that there were exactly two trees on 26 vertices whose independence sequence was not log concave. They also showed how these two examples could be generalized to create two families of trees whose members are all not log concave. 
Finally\, in early 2025\, Galvin provided a family of trees with the property that for any chosen positive integer k\, there is a member T of the family where log concavity breaks at index alpha(T) – k\, where alpha(T) is the independence number of T. Outside of these three families\, not much else was known about what causes log concavity to break. In this presentation\, I will discuss joint work with Shiqi Sun\, where we used the PatternBoost architecture to train a machine to find counter-examples to the log concavity conjecture. We will discuss the successes of this approach – finding tens of thousands of new counter-examples with vertex set sizes varying from 27 to 101 – and some of its fascinating failures.\n\n\n\n  \nTuesday\, Sep. 9\, 2025 \n\n\n\n9:00–9:30 am\nMorning refreshments\n\n\n9:30–10:30 am\nMaria Prat Colomer\, Brown University\nTitle: From PINNs to Computer-Assisted Proofs for Fluid Dynamics\nAbstract: Physics-Informed Neural Networks (PINNs) have emerged as an alternative to traditional numerical methods for solving partial differential equations (PDEs). We apply PINNs to the study of low regularity problems in fluid dynamics\, focusing on the incompressible 2D Euler equations. In particular\, we study V-states\, which are a class of weak\, non-smooth solutions for which the vorticity is the characteristic function of a domain that rotates with constant angular velocity. We have obtained an approximate solution of a limiting V-state using a PINN and we are currently working on a rigorous proof of the existence of a nearby solution through a computer-assisted proof. 
Our PINN-based numerical approximation significantly improves on traditional methods\, a key factor being the integration of prior mathematical knowledge of the problem to effectively explore the solution space.\n\n\n10:30–11:00 am\nBreak\n\n\n11:00 am–12:00 pm\nSébastien Racanière\, Google DeepMind\nTitle: Generative models and high dimensional symmetries: the case of Lattice QCD\nAbstract: Applying normalizing flows\, a machine learning technique for mapping distributions\, to Lattice QCD offers a promising route to enhance simulations and overcome limitations of traditional methods like Hybrid Monte Carlo. LQCD aims to compute expectation values of observables from an intractable distribution defined over a lattice of fields. Normalizing flows can learn this complex distribution and generate new configurations\, improving efficiency and addressing challenges such as critical slowing down and topological freezing. Topological freezing\, in particular\, traps simulations in local minima and prevents exploration of the full configuration space\, affecting accuracy. This approach incorporates the symmetries of LQCD through gauge equivariant flows\, leading to successful definitions and good effective sample sizes on smaller lattices. Beyond accelerating configuration generation\, normalizing flows also find application in variance reduction for observable calculation and exploring phenomena at different scales within LQCD. 
While further research is needed to apply these methods at the scale of state-of-the-art LQCD calculations\, these advancements hold significant potential to improve the accuracy\, efficiency\, and reach of future simulations.\n\n\n12:00–1:30 pm\nLunch break\n\n\n1:30–2:30 pm\nSergei Gukov\, Caltech\nTitle: On sparse reward problems in mathematics\nAbstract: An alternative title for this talk could be “Learning Hardness.” To see why\, we will explore some long-standing open problems in mathematics and examine what makes them hard from a computational perspective. We will argue that\, in many cases\, the difficulty arises from a highly uneven distribution of hardness within families of related problems\, where the truly hard cases lie far out in the tail. We will then discuss how recent advances in AI may provide new tools to tackle these challenges. Based in part on the recent work with A.Shehper\, A.Medina-Mardones\, L.Fagan\, B.Lewandowski\, A.Gruen\, Y.Qiu\, P.Kucharski\, and Z.Wang.\n\n\n2:30–2:45 pm\nBreak\n\n\n2:45–3:45 pm\nAlyson Deines\, IDA-CCR La Jolla; Tamara Veenstra\, IDA-CCR La Jolla; Joanna Bieri\, University of Redlands\nTitle: Machine learning $L$-functions\nAbstract: We study the vanishing order of rational $L$-functions and Maass form $L$-functions from a data scientific perspective. Each $L$-function is represented by finitely many Dirichlet coefficients\, the normalization of which depends on the context. We observe murmurations by averaging over these datasets. For rational $L$-functions\, we find that PCA clusters rational $L$-functions by their vanishing order and record that LDA and neural networks may accurately predict this quantity. 
For Maass form $L$-functions\, while PCA does not cluster these $L$-functions\, we still find that LDA and neural networks may accurately predict this quantity.\n\n\n3:45–4:00 pm\nBreak\n\n\n4:00–5:00 pm\nMark Hughes\, Brigham Young University\nTitle: Modelling the concordance group via contrastive learning\nAbstract: The concordance group of knots in 3-space is an abelian group formed by the equivalence classes of knots under the relation of concordance\, where two knots are concordant if they are the boundary of a smooth annulus properly embedded in the 4-dimensional product space S^3 x I. Though studied since 1966\, properties of the concordance groups (and even the recognition problem of deciding when a knot is null-concordant\, or slice) are difficult to study. In this talk I will outline ongoing attempts to model the concordance group using contrastive learning. This is joint work with Onkar Singh Gujral.\n\n\n\n  \n  \nWednesday Sep. 10\, 2025 \n\n\n\n9:00–9:30 am\nMorning refreshments\n\n\n9:30–10:30 am\nYang-Hui He\, University of Oxford (Via Zoom)\nTitle: AI for Mathematics: Bottom-up\, Top-Down\, Meta-\nAbstract: We argue how AI can assist mathematics in three ways: theorem-proving\, conjecture formulation\, and language processing. Inspired by initial experiments in geometry and string theory in 2017\, we summarize how this emerging field has grown over the past years\, and show how various machine-learning algorithms can help with pattern detection across disciplines ranging from algebraic geometry to representation theory\, to combinatorics\, and to number theory. 
At the heart of the programme is the question of how AI helps with theoretical discovery\, and the implications for the future of mathematics.\n\n\n10:30–11:00 am\nBreak\n\n\n11:00 am–12:00 pm\nGiorgi Butbaia\, University of New Hampshire\nTitle: Computational String Theory using Machine Learning\nAbstract: Calabi-Yau compactifications of the $E_8\times E_8$ heterotic string provide a promising route to recovering the four-dimensional particle physics described by the Standard Model. While the topology of the Calabi-Yau space determines the overall matter content in the low-energy effective field theory\, further details of the compactification geometry are needed to calculate the normalized physical couplings and masses of elementary particles. In this talk\, we present novel numerical techniques for computing physically normalized Yukawa couplings in a number of heterotic models in the standard embedding using geometric machine learning and equivariant neural networks. We observe that the results produced using these techniques are in excellent agreement with the expected values for certain special cases\, where the answers are known. In the case of the Tian-Yau manifold\, which defines a model with three generations and has $h^{2\,1}>1$\, we provide a first-of-its-kind calculation of the normalized Yukawa couplings. As part of this work\, we have developed a Python library called cymyc\, which streamlines calculation of the Calabi-Yau metric and the Yukawa couplings on arbitrary Calabi-Yau manifolds that are realized as complete intersections and provides a framework for studying the differential geometric properties\, such as the curvature.\n\n\n12:00–1:30 pm\nLunch break\n\n\n1:30–2:30 pm\nEric Mjolsness\, UC Irvine\nTitle: Graph operators for science-applied AI/ML\nAbstract: Scalable\, structured graphs play a central role in mathematical problem definition for scientific applications of artificial intelligence and machine learning. 
Qualitatively diverse kinds of operators are necessary to bring these graphs to life. Continuous-time processes govern the evolution of spatial graph embeddings and other graph-local differential equation systems\, as well as the flow of probability between locally similar graph structures in a probabilistic Fock space\, according to rules in a dynamical graph grammar (DGG). Both kinds of dynamics have biophysical application\, e.g. to the dynamic cytoskeleton\, and both obey graph-centric time-evolution operators in an operator algebra that can be differentiated for learning. On the other hand\, coarse-scale discrete jumps in graph structure such as global mesh refinement can be modeled with a “graph lineage”: a sequence of sparsely interrelated graphs whose size grows roughly exponentially with level number. Graph lineages permit the definition of substantially more cost-efficient skeletal graph products\, as versions of classic binary graph operators such as the Cartesian product and direct product of graphs\, with analogous but not identical properties. Applications to deep neural networks and to multigrid numerical methods are shown.\nThese two graph operator frameworks are interrelated. Further graph lineage operators allow the definition of graph frontier spaces\, accommodating graph grammars and supporting the definition of skeletal graph-graph function spaces. In return\, “confluent” graph grammars\, e.g. for adaptive mesh generation\, permit the definition of graph lineages through iteration. I will also sketch the design of compatible AI for Science systems that may exploit DGGs.\nJoint work with Cory Scott and Matthew Hur.\n\n\n2:30–3:00 pm\nBreak\n\n\n3:00–5:00 pm\nPanel and Discussion Group: Jordan Ellenberg\, Tamara Veenstra\, Sébastien Racanière\, Kyu-Hwan Lee\, Sergei Gukov\n\n\n\n  \n\n  \n  \n 
URL:https://cmsa.fas.harvard.edu/event/mml_2025/
LOCATION:CMSA 20 Garden Street Cambridge\, Massachusetts 02138 United States
CATEGORIES:Event,Workshop
ATTACH;FMTTYPE=image/jpeg:https://cmsa.fas.harvard.edu/media/MML_Reunion_poster.2.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250909T161500
DTEND;TZID=America/New_York:20250909T183000
DTSTAMP:20260411T144951Z
CREATED:20250829T204407Z
LAST-MODIFIED:20250908T135742Z
UID:10003774-1757434500-1757442600@cmsa.fas.harvard.edu
SUMMARY:Higher categories of cobordisms
DESCRIPTION:Geometry and Quantum Theory Seminar \nSpeaker: Lorenzo Riva \nTitle: Higher categories of cobordisms \nAbstract: I will give a brief introduction to topological field theories from a higher categorical perspective. After saying a few things about higher categories\, I will define a family of n-categories of bordisms and talk about their universal properties. I will try to squeeze in the canonical example — representations of the 2-dimensional oriented bordism 2-category are separable symmetric Frobenius algebras — and\, time permitting\, talk about my current work. \n 
URL:https://cmsa.fas.harvard.edu/event/quantumgeo_9925/
LOCATION:Science Center 507\, 1 Oxford Street\, Cambridge\, 02138
CATEGORIES:Geometry and Quantum Theory Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-Geometry-Quantum-Theory-9.9.25-scaled.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250911T090000
DTEND;TZID=America/New_York:20250912T170000
DTSTAMP:20260411T144951Z
CREATED:20250502T175902Z
LAST-MODIFIED:20251026T044243Z
UID:10003743-1757581200-1757696400@cmsa.fas.harvard.edu
SUMMARY:Big Data Conference 2025
DESCRIPTION:Big Data Conference 2025 \nDates: Sep. 11–12\, 2025 \nLocation: Harvard University CMSA\, 20 Garden Street\, Cambridge & via Zoom \nThe Big Data Conference features speakers from the Harvard community as well as scholars from across the globe\, with talks focusing on computer science\, statistics\, math and physics\, and economics. \nInvited Speakers \n\nMarkus J. Buehler\, MIT\nYiling Chen\, Harvard\nJordan Ellenberg\, UW Madison\nYue M. Lu\, Harvard\nPankaj Mehta\, BU\nNick Patterson\, Harvard\nGautam Reddy\, Princeton\nTrevor David Rhone\, Rensselaer Polytechnic Institute\nTess Smidt\, MIT\n\nOrganizers: \nMichael M. Desai\, Harvard OEB | Michael R. Douglas\, Harvard CMSA | Yannai A. Gonczarowski\, Harvard Economics | Efthimios Kaxiras\, Harvard Physics | Melanie Weber\, Harvard SEAS \n  \nBig Data Youtube Playlist \n  \nSchedule \nThursday\, Sep. 11\, 2025 \n  \n\n\n\n9:00 am\nRefreshments\n\n\n9:30 am\nIntroductions\n\n\n9:45–10:45 am\nGautam Reddy\, Princeton \nTitle: Global epistasis in genotype-phenotype maps\n\n\n10:45–11:00 am\nBreak\n\n\n11:00 am–12:00 pm\nNick Patterson\, Harvard \nTitle: The Origin of the Indo-Europeans \nAbstract: Indo-European is the largest family of human languages\, with very wide geographical distribution and more than 3 billion native speakers. How did this family arise and spread? This question has been discussed for nearly 250 years\, but with the availability of DNA from ancient fossils it is now largely understood\, at least in broad outlines. We will describe what we now know about the origins.\n\n\n12:00–1:30 pm\nLunch break\n\n\n1:30–2:30 pm\nMarkus Buehler\, MIT \nTitle: Superintelligence for scientific discovery \nAbstract: AI is moving beyond prediction to become a partner in invention. While today’s models excel at interpolating within known data\, true discovery requires stepping outside existing truths. 
This talk introduces superintelligent discovery engines built on multi-agent swarms: diverse AI agents that interact\, compete\, and cooperate to generate structured novelty. Guided by Gödel’s insight that no closed system is complete\, these swarms create gradients of difference – much like temperature gradients in thermodynamics – that sustain flow\, invention\, and surprise. Case studies in protein design and music composition show how swarms escape data biases\, invent novel structures\, and weave long-range coherence\, producing creativity that rivals human processes. By moving from “big data” to “big insight”\, these systems point toward a new era of AI that composes knowledge across science\, engineering\, and the arts.\n\n\n2:30–2:45 pm\nBreak\n\n\n2:45–3:45 pm\nJordan Ellenberg\, UW Madison \nTitle: What does machine learning have to offer mathematics?\n\n\n3:45–4:00 pm\nBreak\n\n\n4:00–5:00 pm\nPankaj Mehta\, Boston University \nTitle: Thinking about high-dimensional biological data in the age of AI \nAbstract: The molecular biology revolution has transformed our view of living systems. Scientific explanations of biological phenomena are now synonymous with the identification of the genes and proteins. The preeminence of the molecular paradigm has only become more pronounced as new technologies allow us to make measurements at scale. Combining this wealth of data with new artificial intelligence (AI) techniques is widely viewed as the future of biology. Here\, I will discuss the promise and perils of this approach. I will focus on our unpublished work with collaborators on two fronts: (i) transformer-based models for understanding genotype-to-phenotype maps\, and (ii) LLM-based ‘foundational models’ for cellular identity\, such as TranscriptFormer\, which is trained on single-cell RNA sequencing (scRNAseq) data. 
While LLMs excel at capturing complex evolutionary and demographic structure in DNA sequence data\, they are much less adept at elucidating the biology of cellular identity. We show that simple parameter-free models based on linear-algebra outperform TranscriptFormer on downstream tasks related to cellular identity\, even though TranscriptFormer has nearly a billion parameters. If time permits\, I will conclude by showing how we can combine ideas from linear algebra\, bifurcation theory\, and statistical physics to classify cell fate transitions using scRNAseq data.\n\n\n\n  \nFriday\, Sep. 12\, 2025  \n\n\n\n9:00-9:45 am\nRefreshments\n\n\n9:45–10:45 am\nYiling Chen\, Harvard \nTitle: Data Reliability Scoring \nAbstract: Imagine you are trying to make a data-driven decision\, but the data at hand may be noisy\, biased\, or even strategically manipulated. Can you assess whether such a dataset is reliable—without access to ground truth?\nWe initiate the study of reliability scoring for datasets reported by potentially strategic data sources. While the true data remain unobservable\, we assume access to auxiliary observations generated by an unknown statistical process that depends on the truth. We introduce the Gram Determinant Score\, a reliability measure that evaluates how well the reported data align with the unobserved truth\, using only the reported data and the auxiliary observations. The score comes with provable guarantees: it preserves several natural reliability orderings. Experimentally\, it effectively captures data quality in settings with synthetic noise and contrastive learning embeddings.\nThis talk is based on joint work with Shi Feng\, Fang-Yi Yu\, and Paul Kattuman.\n\n\n10:45–11:00 am\nBreak\n\n\n11:00 am –12:00 pm\nYue M. Lu\, Harvard \nTitle: Nonlinear Random Matrices in High-Dimensional Estimation and Learning \nAbstract: In recent years\, new classes of structured random matrices have emerged in statistical estimation and machine learning. 
Understanding their spectral properties has become increasingly important\, as these matrices are closely linked to key quantities such as the training and generalization performance of large neural networks and the fundamental limits of high-dimensional signal recovery. Unlike classical random matrix ensembles\, these new matrices often involve nonlinear transformations\, introducing additional structural dependencies that pose challenges for traditional analysis techniques. \nIn this talk\, I will present a set of equivalence principles that establish asymptotic connections between various nonlinear random matrix ensembles and simpler linear models that are more tractable for analysis. I will then demonstrate how these principles can be applied to characterize the performance of kernel methods and random feature models across different scaling regimes and to provide insights into the in-context learning capabilities of attention-based Transformer networks.\n\n\n12:00–1:30 pm\nLunch break\n\n\n1:30–2:30 pm\nTrevor David Rhone\, Rensselaer Polytechnic Institute \nTitle: Accelerating the discovery of van der Waals quantum materials using AI \nAbstract: van der Waals (vdW) materials are exciting platforms for studying emergent quantum phenomena\, ranging from long-range magnetic order to topological order. A conservative estimate for the number of candidate vdW materials exceeds ~10^6 for monolayers and ~10^12 for heterostructures. How can we accelerate the exploration of this entire space of materials? Can we design quantum materials with desirable properties\, thereby advancing innovation in science and technology? A recent study showed that artificial intelligence (AI) can be harnessed to discover new vdW Heisenberg ferromagnets based on Cr2Ge2Te6 [1]\, [2] and magnetic vdW topological insulators based on MnBi2Te4 [3]. 
In this talk\, we will harness AI to efficiently explore the large chemical space of vdW materials and to guide the discovery of vdW materials with desirable spin and charge properties. We will focus on crystal structures based on monolayer Cr2I6 of the form A2X6\, which are studied using density functional theory (DFT) calculations and AI. Magnetic properties\, such as the magnetic moment\, are determined. The formation energy is also calculated and used as a proxy for the chemical stability. We also investigate monolayers based on MnBi2Te4 of the form AB2X4 to identify novel topological materials. Further to this\, we study heterostructures based on MnBi2Te4/Sb2Te3 stacks. We show that AI\, combined with DFT\, can provide a computationally efficient means to predict the thermodynamic and magnetic properties of vdW materials [4]\, [5]. This study paves the way for the rapid discovery of chemically stable vdW quantum materials with applications in spintronics\, magnetic memory and novel quantum computing architectures.\n[1] T. D. Rhone et al.\, “Data-driven studies of magnetic two-dimensional materials\,” Sci. Rep.\, vol. 10\, no. 1\, p. 15795\, 2020.\n[2] Y. Xie\, G. Tritsaris\, O. Granas\, and T. Rhone\, “Data-Driven Studies of the Magnetic Anisotropy of Two-Dimensional Magnetic Materials\,” J. Phys. Chem. Lett.\, vol. 12\, no. 50\, pp. 12048–12054.\n[3] R. Bhattarai\, P. Minch\, and T. D. Rhone\, “Investigating magnetic van der Waals materials using data-driven approaches\,” J. Mater. Chem. C\, vol. 11\, p. 5601\, 2023.\n[4] T. D. Rhone et al.\, “Artificial Intelligence Guided Studies of van der Waals Magnets\,” Adv. Theory Simulations\, vol. 6\, no. 6\, p. 2300019\, 2023.\n[5] P. Minch\, R. Bhattarai\, K. Choudhary\, and T. D. Rhone\, “Predicting magnetic properties of van der Waals magnets using graph neural networks\,” Phys. Rev. Mater.\, vol. 8\, no. 11\, p. 114002\, Nov. 
2024.\nThis work used the Extreme Science and Engineering Discovery Environment (XSEDE)\, which is supported by National Science Foundation Grant No. ACI-1548562. This research used resources of the Argonne Leadership Computing Facility\, which is a DOE Office of Science User Facility supported under Contract No. DE-AC02-06CH11357. This material is based on work supported by the National Science Foundation CAREER award under Grant No. 2044842.\n\n\n2:30–2:45 pm\nBreak\n\n\n2:45–3:45 pm\nTess Smidt\, MIT \nTitle: Applications of Euclidean neural networks to understand and design atomistic systems \nAbstract: Atomic systems (molecules\, crystals\, proteins\, etc.) are naturally represented by a set of coordinates in 3D space labeled by atom type. This poses a challenge for machine learning due to the sensitivity of coordinates to 3D rotations\, translations\, and inversions (the symmetries of 3D Euclidean space). Euclidean symmetry-equivariant Neural Networks (E(3)NNs) are specifically designed to address this issue. They faithfully capture the symmetries of physical systems\, handle 3D geometry\, and operate on the scalar\, vector\, and tensor fields that characterize these systems. \nE(3)NNs have achieved state-of-the-art results across atomistic benchmarks\, including small-molecule property prediction\, protein-ligand binding\, force prediction for crystals\, molecules\, and heterogeneous catalysis. By merging neural network design with group representation theory\, they provide a principled way to embed physical symmetries directly into learning. In this talk\, I will survey recent applications of E(3)NNs to materials design and highlight ongoing debates in the AI for atomistic sciences community: how to balance the incorporation of physical knowledge with the drive for engineering efficiency.\n\n\n\n 
URL:https://cmsa.fas.harvard.edu/event/bigdata_2025/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Big Data Conference,Conference,Event
ATTACH;FMTTYPE=image/jpeg:https://cmsa.fas.harvard.edu/media/Big-Data-2025_11x17.9-scaled.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250915T090000
DTEND;TZID=America/New_York:20250918T170000
DTSTAMP:20260411T144951
CREATED:20250710T134311Z
LAST-MODIFIED:20250930T154307Z
UID:10003755-1757926800-1758214800@cmsa.fas.harvard.edu
SUMMARY:The Geometry of Machine Learning
DESCRIPTION:The Geometry of Machine Learning \nDates: September 15–18\, 2025 \nLocation: Harvard CMSA\, Room G10\, 20 Garden Street\, Cambridge MA 02138 \nDespite the extraordinary progress in large language models\, mathematicians suspect that other dimensions of intelligence must be defined and simulated to complete the picture. Geometric and symbolic reasoning are among these. In fact\, there seems to be much to learn about existing ML by considering it from a geometric perspective\, e.g. what is happening to the data manifold as it moves through a NN?  How can geometric and symbolic tools be interfaced with LLMs? A more distant goal\, one that seems only approachable through AIs\, would be to gain some insight into the large-scale structure of mathematics as a whole: the geometry of math\, rather than geometry as a subject within math. This conference is intended to begin a discussion on these topics. \nSpeakers \n\nMaissam Barkeshli\, University of Maryland\nEve Bodnia\, Logical Intelligence\nAdam Brown\, Stanford\nBennett Chow\, UCSD & IAS\nMichael Freedman\, Harvard CMSA\nElliot Glazer\, Epoch AI\nJames Halverson\, Northeastern\nJesse Han\, Math Inc.\nJunehyuk Jung\, Brown University\nAlex Kontorovich\, Rutgers University\nYann LeCun\, New York University & META*\nJared Duker Lichtman\, Stanford & Math Inc.\nBrice Ménard\, Johns Hopkins\nMichael Mulligan\, UCR & Logical Intelligence\nPatrick Shafto\, DARPA & Rutgers University\n\nOrganizers: Michael R. Douglas (CMSA) and Mike Freedman (CMSA) \n  \nGeometry of Machine Learning Youtube Playlist \n  \nSchedule \nMonday\, Sep. 15\, 2025 \n\n\n\n8:30–9:00 am\nMorning refreshments\n\n\n9:00–10:00 am\nJames Halverson\, Northeastern \nTitle: Sparsity and Symbols with Kolmogorov-Arnold Networks \nAbstract: In this talk I’ll review Kolmogorov-Arnold nets\, as well as new theory and applications related to sparsity and symbolic regression\, respectively.  
I’ll review essential results regarding KANs\, show how sparsity masks relate deep nets and KANs\, and how KANs can be utilized alongside multimodal language models for symbolic regression. Empirical results will necessitate a few slides\, but the bulk will be chalk.\n\n\n10:00–10:30 am\nBreak\n\n\n10:30–11:30 am\nMaissam Barkeshli\, University of Maryland \nTitle: Transformers and random walks: from language to random graphs \nAbstract: The stunning capabilities of large language models give rise to many questions about how they work and how much more capable they can possibly get. One way to gain additional insight is via synthetic models of data with tunable complexity\, which can capture the basic relevant structures of real data. In recent work we have focused on sequences obtained from random walks on graphs\, hypergraphs\, and hierarchical graphical structures. I will present some recent empirical results for work in progress regarding how transformers learn sequences arising from random walks on graphs. The focus will be on neural scaling laws\, unexpected temperature-dependent effects\, and sample complexity.\n\n\n11:30 am–12:00 pm\nBreak\n\n\n12:00–1:00 pm\nAdam Brown\, Stanford \nTitle: LLMs\, Reasoning\, and the Future of Mathematical Sciences \nAbstract: Over the last half decade\, the mathematical capabilities of large language models (LLMs) have leapt from preschooler to undergraduate and now beyond. This talk reviews recent progress\, and speculates as to what it will mean for the future of mathematical sciences if these trends continue.\n\n\n\n  \nTuesday\, Sep. 16\, 2025 \n\n\n\n8:30–9:00 am\nMorning refreshments\n\n\n9:00–10:00 am\nJunehyuk Jung\, Brown University \nTitle: AlphaGeometry: a step toward automated math reasoning \nAbstract: Last summer\, Google DeepMind’s AI systems made headlines by achieving Silver Medal level performance on the notoriously challenging International Mathematical Olympiad (IMO) problems. 
For instance\, AlphaGeometry 2\, one of these remarkable systems\, solved the geometry problem in a mere 19 seconds! \nIn this talk\, we will delve into the inner workings of AlphaGeometry\, exploring the innovative techniques that enable it to tackle intricate geometric puzzles. We will uncover how this AI system combines the power of neural networks with symbolic reasoning to discover elegant solutions.\n\n\n10:00–10:30 am\nBreak\n\n\n10:30–11:30 am\nBennett Chow\, UCSD and IAS \nTitle: Ricci flow as a test for AI\n\n\n11:30 am–12:00 pm\nBreak\n\n\n12:00–1:00 pm\nJared Duker Lichtman\, Stanford & Math Inc. and Jesse Han\, Math Inc. \nTitle: Gauss – towards autoformalization for the working mathematician \nAbstract: In this talk we’ll highlight some recent formalization progress using a new agent – Gauss. We’ll outline a recent Lean proof of the Prime Number Theorem in strong form\, completing a challenge set in January 2024 by Alex Kontorovich and Terry Tao. We hope Gauss will assist working mathematicians\, especially those who do not write formal code themselves.\n\n\n5:00–6:00 pm\nSpecial Lecture: Yann LeCun\, Science Center Hall C\n\n\n\n  \nWednesday\, Sep. 17\, 2025 \n\n\n\n8:30–9:00 am\nRefreshments\n\n\n9:00–10:00 am\nMichael Mulligan\, UCR and Logical Intelligence \nTitle: Spontaneous Kolmogorov-Arnold Geometry in Vanilla Fully-Connected Neural Networks \nAbstract: The Kolmogorov-Arnold (KA) representation theorem constructs universal\, but highly non-smooth inner functions (the first layer map) in a single (non-linear) hidden layer neural network. Such universal functions have a distinctive local geometry\, a “texture\,” which can be characterized by the inner function’s Jacobian\, $J(\mathbf{x})$\, as $\mathbf{x}$ varies over the data. It is natural to ask if this distinctive KA geometry emerges through conventional neural network optimization. 
We find that indeed KA geometry often does emerge through the process of training vanilla single hidden layer fully-connected neural networks (MLPs). We quantify KA geometry through the statistical properties of the exterior powers of $J(\mathbf{x})$: number of zero rows and various observables for the minor statistics of $J(\mathbf{x})$\, which measure the scale and axis alignment of $J(\mathbf{x})$. This leads to a rough phase diagram in the space of function complexity and model hyperparameters where KA geometry occurs. The motivation is first to understand how neural networks organically learn to prepare input data for later downstream processing and\, second\, to learn enough about the emergence of KA geometry to accelerate learning through a timely intervention in network hyperparameters. This research is the “flip side” of KA-Networks (KANs). We do not engineer KA into the neural network\, but rather watch KA emerge in shallow MLPs.\n\n\n10:00–10:30 am\nBreak\n\n\n10:30–11:30 am\nEve Bodnia\, Logical Intelligence \nTitle: \nAbstract: We introduce a method of topological analysis on spiking correlation networks in neurological systems. This method explores the neural manifold as in the manifold hypothesis\, which posits that information is often represented by a lower-dimensional manifold embedded in a higher-dimensional space. After collecting neuron activity from human and mouse organoids using a micro-electrode array\, we extract connectivity using pairwise spike-timing time correlations\, which are optimized for time delays introduced by synaptic delays. We then look at network topology to identify emergent structures and compare the results to two randomized models – constrained randomization and bootstrapping across datasets. 
In histograms of the persistence of topological features\, we see that the features from the original dataset consistently exceed the variability of the null distributions\, suggesting that the observed topological features reflect significant correlation patterns in the data rather than random fluctuations. In a study of network resiliency\, we found that random removal of 10 % of nodes still yielded a network with a lesser but still significant number of topological features in the homology group H1 (counts 2-dimensional voids in the dataset) above the variability of our constrained randomization model; however\, targeted removal of nodes in H1 features resulted in rapid topological collapse\, indicating that the H1 cycles in these brain organoid networks are fragile and highly sensitive to perturbations. By applying topological analysis to neural data\, we offer a new complementary framework to standard methods for understanding information processing across a variety of complex neural systems.\n\n\n11:30 am–12:00 pm\nBreak\n\n\n12:00–1:00 pm\nAlex Kontorovich\, Rutgers University \nTitle: The Shape of Math to Come \nAbstract: We will discuss some ongoing experiments that may have meaningful impact on what working in research mathematics might look like in a decade (if not sooner).\n\n\n5:00–6:00 pm\nMike Freedman Millennium Lecture: The Poincaré Conjecture and Mathematical Discovery (Science Center Hall D)\n\n\n\n  \nThursday\, Sep. 18\, 2025 \n\n\n\n8:30–9:00 am\nMorning refreshments\n\n\n9:00–10:00 am\nElliott Glazer\, Epoch AI \nTitle: FrontierMath to Infinity \nAbstract: I will discuss FrontierMath\, a mathematical problem solving benchmark I developed over the past year\, including its design philosophy and what we’ve learned about AI’s trajectory from it. 
I will then look much further out\, speculate about what a “perfectly efficient” mathematical intelligence should be capable of\, and discuss how high-ceiling math capability metrics can illuminate the path towards that ideal.\n\n\n10:00–10:30 am\nBreak\n\n\n10:30–11:30 am\nBrice Ménard\, Johns Hopkins \nTitle: Demystifying the over-parametrization of neural networks \nAbstract: I will show how to estimate the dimensionality of neural encodings (learned weight structures) to assess how many parameters are effectively used by a neural network. I will then show how their scaling properties provide us with fundamental exponents on the learning process of a given task. I will comment on connections to thermodynamics.\n\n\n11:30 am–12:00 pm\nBreak\n\n\n12:00–12:30 pm\nPatrick Shafto\, Rutgers \nTitle: Math for AI and AI for Math \nAbstract: I will briefly discuss two DARPA programs aiming to deepen connections between mathematics and AI\, specifically through geometric and symbolic perspectives. The first aims for mathematical foundations for understanding the behavior and performance of modern AI systems such as Large Language Models and Diffusion models. The second aims to develop AI for pure mathematics through an understanding of abstraction\, decomposition\, and formalization. I will close with some thoughts on the coming convergence between AI and math.\n\n\n12:30–12:45 pm\nBreak\n\n\n12:45–2:00 pm\nMike Freedman\, Harvard CMSA \nTitle: How to think about the shape of mathematics \nFollowed by group discussion \n\n\n\nSupport provided by Logical Intelligence. \n 
URL:https://cmsa.fas.harvard.edu/event/mlgeometry/
LOCATION:CMSA 20 Garden Street Cambridge\, Massachusetts 02138 United States
CATEGORIES:Conference,Event
ATTACH;FMTTYPE=image/jpeg:https://cmsa.fas.harvard.edu/media/GML_2025.7-scaled.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250915T150000
DTEND;TZID=America/New_York:20250915T160000
DTSTAMP:20260411T144951
CREATED:20250910T193835Z
LAST-MODIFIED:20250910T194841Z
UID:10003788-1757948400-1757952000@cmsa.fas.harvard.edu
SUMMARY:Orientifolds for F-theory on K3 Surfaces
DESCRIPTION:Quantum Field Theory and Physical Mathematics Seminar \nSpeaker: Chuck Doran (Alberta/CMSA) \nTitle: Orientifolds for F-theory on K3 Surfaces \nAbstract: Compactification of F-theory on an elliptically fibered K3 surface provides a framework to encode type IIB string theory on elliptic curves\, with the Kaehler modulus of the elliptic curve encoded in the complex structure of the elliptic fibers. In work with Malmendier\, Mendez-Diez\, and Rosenberg we extend that perspective by examining F-theory orientifolds on elliptically fibered K3 surfaces and connecting them to D-brane classifications using real K-theory (KR-theory). The real structures – antiholomorphic involutions – on our K3 surfaces connect the geometry with the physics\, providing a natural setting for understanding the interplay between elliptic fibration structures and D-brane classifications in F-theory. We construct Real normal forms with their associated antiholomorphic involutions and use this to make explicit the 2-torsion Brauer twist that relates our normal forms to the Jacobian (Weierstrass normal form) elliptic fibration\, including the realization of a representative for the twisting class as an Azumaya algebra. This all connects back to the physics by considering three families of real K3 surfaces whose string limits give the three different type IIB theories on P1 with four type I_0^∗ Kodaira fibers.
URL:https://cmsa.fas.harvard.edu/event/qft_91525/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Quantum Field Theory and Physical Mathematics
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-QFT-and-Physical-Mathematics-9.15.25-scaled.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250915T163000
DTEND;TZID=America/New_York:20250915T173000
DTSTAMP:20260411T144951
CREATED:20250904T152315Z
LAST-MODIFIED:20250904T152759Z
UID:10003776-1757953800-1757957400@cmsa.fas.harvard.edu
SUMMARY:Topological Manifolds – The First 100 Years
DESCRIPTION:Colloquium \nSpeaker: Michael Freedman (Harvard CMSA and Logical Intelligence) \nTitle: Topological Manifolds – The First 100 Years \nAbstract: I’ll review manifold topology in the topological category from its start with work of Rado (1925) and Kneser (1926) to the present. Work of Moise\, Mazur\, Kirby\, Siebenmann\, Sullivan\, Kruskal\, and the speaker will be discussed. In my view there is one pressing open question (the A-B slice problem). I will end with some thoughts on putting an AI to work on it. \n  \n 
URL:https://cmsa.fas.harvard.edu/event/colloquium-91525/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Colloquium
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-Colloquium-9.15.2025-scaled.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250916T170000
DTEND;TZID=America/New_York:20250916T180000
DTSTAMP:20260411T144951
CREATED:20250807T142820Z
LAST-MODIFIED:20250922T134159Z
UID:10003760-1758042000-1758045600@cmsa.fas.harvard.edu
SUMMARY:Geometry of Machine Learning Special Lecture: Yann LeCun
DESCRIPTION:Geometry of Machine Learning Special Lecture: Yann LeCun \nTitle: Self-Supervised Learning\, JEPA\, World Models\, and the future of AI \nDate: Tuesday\, Sep. 16\, 2025 \nTime: 5:00 pm ET \nLocation: Harvard Science Center\, Hall C & via Zoom Webinar
URL:https://cmsa.fas.harvard.edu/event/lecun91625/
LOCATION:Harvard Science Center\, 1 Oxford Street\, Cambridge\, MA\, 02138
CATEGORIES:Special Lectures
ATTACH;FMTTYPE=image/jpeg:https://cmsa.fas.harvard.edu/media/YannLeCun_GML-scaled.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250917T170000
DTEND;TZID=America/New_York:20250917T180000
DTSTAMP:20260411T144951
CREATED:20250311T134916Z
LAST-MODIFIED:20251010T115024Z
UID:10003656-1758128400-1758132000@cmsa.fas.harvard.edu
SUMMARY:Millennium Prize Problems Lecture - Michael Freedman: The Poincaré Conjecture and Mathematical Discovery  
DESCRIPTION:Millennium Prize Problems Lecture\nDate: September 17\, 2025 \nLocation: Harvard Science Center Hall D & via Zoom Webinar \nTime: 5:00–6:00 pm \nSpeaker: Michael Freedman\, Harvard CMSA and Logical Intelligence \nTitle: The Poincaré Conjecture and Mathematical Discovery \nAbstract: The AI age requires us to re-examine what mathematics is about. The Seven Millennium Problems provide an ideal lens for doing so. Five of the seven are core mathematical questions\, two are meta-mathematical – asking about the scope of mathematics. The Poincaré conjecture represents one of the core subjects\, manifold topology. I’ll explain what it is about\, its broader context\, and why people cared so much about finding a solution\, which ultimately arrived through the work of R. Hamilton and G. Perelman. Although stated in manifold topology\, the proof requires vast developments in the theory of parabolic partial differential equations\, some of which I will sketch. Like most powerful techniques\, the methods survive their original objectives and are now deployed widely in both three- and four-dimensional manifold topology. \n  \nRead more about the Poincaré Conjecture at the Clay Math website. \nOrganizers: Martin Bridson\, Clay Mathematics Institute | Dan Freed\, Harvard University and CMSA | Mike Hopkins\, Harvard University \n\nMillennium Prize Problems Lecture Series
URL:https://cmsa.fas.harvard.edu/event/clay_91725/
LOCATION:Harvard Science Center Hall D\, 1 Oxford Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Millennium Prize Problems Lecture,Special Lectures
ATTACH;FMTTYPE=image/jpeg:https://cmsa.fas.harvard.edu/media/Freedman_web_ad.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250918T160000
DTEND;TZID=America/New_York:20250918T170000
DTSTAMP:20260411T144951
CREATED:20250904T162209Z
LAST-MODIFIED:20250910T174655Z
UID:10003777-1758211200-1758214800@cmsa.fas.harvard.edu
SUMMARY:Moduli spaces of 4d N=2 quantum field theories
DESCRIPTION:Differential Geometry and Physics Seminar \nSpeaker: Robert Moscrop\, CMSA \nTitle: Moduli spaces of 4d N=2 quantum field theories \nAbstract: Supersymmetry endows quantum field theories with several rich algebraic and geometric structures associated to their moduli space of vacua\, providing powerful tools to study such theories non-perturbatively. For example\, in four-dimensional theories with eight supercharges\, the low energy dynamics of the theory is captured by an algebraic completely integrable system whose base is the Coulomb branch – a distinguished submanifold of the moduli space. This structure is so tightly constrained that there is an ongoing program to classify such theories purely by understanding their Coulomb branch geometry. In this talk\, I will give a gentle introduction to the geometry of the moduli spaces of 4d N=2 theories and\, time permitting\, discuss some recent results showcasing how the geometry of the Coulomb branch can be used to constrain certain physical quantities of the theory. \n 
URL:https://cmsa.fas.harvard.edu/event/dgphys_91825/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Differential Geometry and Physics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/DG-Physics-Seminar-9.18.2025-scaled.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250919T120000
DTEND;TZID=America/New_York:20250919T130000
DTSTAMP:20260411T144951
CREATED:20241211T195345Z
LAST-MODIFIED:20250918T184123Z
UID:10003648-1758283200-1758286800@cmsa.fas.harvard.edu
SUMMARY:Top-Down Perspectives on Symmetry Theories
DESCRIPTION:Member Seminar \nSpeaker: Max Hübner \nTitle: Top-Down Perspectives on Symmetry Theories \nAbstract: I will review the construction and utility of symmetry theories for string-constructed quantum field theories. Symmetry theories are extra-dimensional auxiliary theories separating aspects of a quantum field theory’s symmetries from many of its messier features. For QFTs with extra-dimensional string constructions\, the symmetry theory derives directly from the extra-dimensional geometry. This perspective allows for the study of symmetries of famous string-engineered systems\, such as SCFTs in 5D and 6D\, which we will discuss on an example-by-example basis.
URL:https://cmsa.fas.harvard.edu/event/member-seminar-91925/
LOCATION:Common Room\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Member Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-Member-Seminar-9.19.25-scaled.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250922T150000
DTEND;TZID=America/New_York:20250922T160000
DTSTAMP:20260411T144951
CREATED:20250826T190916Z
LAST-MODIFIED:20250917T134457Z
UID:10003761-1758553200-1758556800@cmsa.fas.harvard.edu
SUMMARY:Non-Supersymmetric Orbifolds\, Quivers and Chen-Ruan Orbifold Cohomology
DESCRIPTION:Quantum Field Theory and Physical Mathematics Seminar \nSpeaker: Max Hübner (Uppsala & CMSA) \nTitle: Non-Supersymmetric Orbifolds\, Quivers and Chen-Ruan Orbifold Cohomology \nAbstract: We consider D3-brane probes of non-supersymmetric orbifolds and IIA on the same class of non-supersymmetric orbifolds. Both setups are characterized\, in part\, by quivers (which in the latter case relate for example to D0-brane probes) from which symmetries constraining the scale-dependence and tachyonic instabilities of the two systems\, respectively\, can be derived. We demonstrate that these considerations can be matched via a geometric analysis of the asymptotic boundary of the relevant orbifolds\, in all cases\, via considerations centered on Chen-Ruan orbifold cohomology.
URL:https://cmsa.fas.harvard.edu/event/qft_92225/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Quantum Field Theory and Physical Mathematics
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-QFT-and-Physical-Mathematics-9.22.25.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250922T163000
DTEND;TZID=America/New_York:20250922T173000
DTSTAMP:20260411T144951
CREATED:20250826T191126Z
LAST-MODIFIED:20250914T170550Z
UID:10003732-1758558600-1758562200@cmsa.fas.harvard.edu
SUMMARY:Turbulent Mixing and Antagonistic Microorganisms
DESCRIPTION:Colloquium \nSpeaker: David Nelson\, Harvard \nTitle: Turbulent Mixing and Antagonistic Microorganisms \nAbstract: Unlike coffee and cream that homogenize when stirred\, growing micro-organisms (e.g.\, bacteria and baker’s yeast) can actively kill each other and avoid mixing.  How do such antagonistic interactions impact the growth and survival of competing strains\, while being spatially advected by turbulent flows?  By using analytic arguments and numerical simulations of a continuum model\, we describe the dynamics of two antagonistic strains that are dispersed by both compressible and incompressible turbulent flows in two spatial dimensions.  A key parameter is the ratio of the fluid transport time to that of biological reproduction\, which determines the winning organism that ultimately takes over the whole population from an initial heterogeneous state\, a process known as fixation.  By quantifying the probability and mean time for fixation\, we discuss how turbulence raises the threshold for biological nucleation and antagonism suppresses flow-induced mixing by depleting the population at interfaces. We highlight the unusual biological consequences of the interplay of turbulent fluid flows with antagonistic population dynamics\, with potential implications for marine microbial ecology and origins of biological chirality.
URL:https://cmsa.fas.harvard.edu/event/colloquium_92225/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Colloquium
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-Colloquium-9.22.2025-scaled.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250925T160000
DTEND;TZID=America/New_York:20250925T170000
DTSTAMP:20260411T144951
CREATED:20250826T192430Z
LAST-MODIFIED:20250919T142937Z
UID:10003762-1758816000-1758819600@cmsa.fas.harvard.edu
SUMMARY:Degeneration of Calabi-Yau 3-folds and 3-forms
DESCRIPTION:Differential Geometry and Physics Seminar \nSpeaker: Teng Fei\, Rutgers \nTitle: Degeneration of Calabi-Yau 3-folds and 3-forms \nAbstract: We study the geometries associated to various 3-forms on a symplectic 6-manifold of different orbit types. As an application\, we demonstrate how this can be used to find Lagrangian foliations and other geometric structures of interest arising from certain degenerations of Calabi-Yau 3-folds. \n 
URL:https://cmsa.fas.harvard.edu/event/dgphys_92525/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Differential Geometry and Physics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/DG-Physics-Seminar-9.25.2025.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250926T120000
DTEND;TZID=America/New_York:20250926T130000
DTSTAMP:20260411T144951
CREATED:20250826T193028Z
LAST-MODIFIED:20250918T172135Z
UID:10003763-1758888000-1758891600@cmsa.fas.harvard.edu
SUMMARY:Sections of fibrations onto curves in characteristic p>0
DESCRIPTION:Member Seminar \nSpeaker: Iacopo Brivio \nTitle: Sections of fibrations onto curves in characteristic p>0 \nAbstract: This talk is based on joint work in progress with Ben Church. Using symplectic geometry\, Pieloch showed that every smooth fibration $f\colon X\to \mathbb{P}^1$ of complex projective varieties always admits a section. I will explain how this theorem can be recovered using techniques from Hodge theory and the Minimal Model Program. An advantage of this approach is that it allows for a positive characteristic generalization\, by replacing the Hodge theoretic input by a crystalline one. I will also give an example showing that Pieloch’s result can fail in characteristic p>0.
URL:https://cmsa.fas.harvard.edu/event/member-seminar-92625/
LOCATION:Common Room\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Member Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-Member-Seminar-9.26.25-1.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250929T150000
DTEND;TZID=America/New_York:20250929T160000
DTSTAMP:20260411T144951
CREATED:20250924T181258Z
LAST-MODIFIED:20250924T183325Z
UID:10003795-1759158000-1759161600@cmsa.fas.harvard.edu
SUMMARY:Graph integrals on Kahler manifolds
DESCRIPTION:Quantum Field Theory and Physical Mathematics Seminar \nSpeaker: Minghao Wang\, Boston University \nTitle: Graph integrals on Kahler manifolds \nAbstract: I will talk about my recent work with Junrong Yan. We proved the convergence of graph integrals on analytic Kahler manifolds in the sense of Cauchy principal values\; these integrals originally arise in holomorphic quantum field theories. In particular\, this allows us to construct geometric invariants of Calabi-Yau metrics. I will also talk about some potential applications of our results. References: arXiv:2507.09170\, arXiv:2401.08113
URL:https://cmsa.fas.harvard.edu/event/qft_92925/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Quantum Field Theory and Physical Mathematics
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-QFT-and-Physical-Mathematics-9.29.25-scaled.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250930T161500
DTEND;TZID=America/New_York:20250930T183000
DTSTAMP:20260411T144951
CREATED:20250829T204925Z
LAST-MODIFIED:20250929T175811Z
UID:10003775-1759248900-1759257000@cmsa.fas.harvard.edu
SUMMARY:Geometry and Quantum Theory Seminar
DESCRIPTION:Geometry and Quantum Theory Seminar \nSpeaker 1: Max Hübner\, CMSA \nTitle: On Topological Structures in String Theory \nAbstract: Geometric engineering constructions in string theory often realize QFTs relative to an extra-dimensional geometry. This perspective parallels the symmetry TFT construction\, where a QFT is presented relative to its extra-dimensional symmetry quiche. Unsurprisingly\, as we will discuss\, these constructions are related: topological features of the extra-dimensional geometry map onto the symmetry TFT. We discuss examples and generalizations beyond purely geometric constructions in string theory. \nSpeaker 2: Bowen Yang\, CMSA \nTitle: Bounded L-theory \nAbstract: Bounded L-groups arise in the intersection of algebraic L-theory and large-scale geometry\, providing a framework for quadratic forms and automorphisms subject to uniform control conditions. These groups play a role in topology and surgery theory\, especially in contexts where one needs to measure obstructions not just algebraically but also geometrically\, with bounds on propagation or support. In this talk I will give a gentle introduction to the basic definitions\, explain how bounded L-groups differ from classical L-groups\, and outline an application to quantum many-body invariants.
URL:https://cmsa.fas.harvard.edu/event/quantumgeo_93025/
LOCATION:Science Center 507\, 1 Oxford Street\, Cambridge\, MA\, 02138
CATEGORIES:Geometry and Quantum Theory Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-Geometry-Quantum-Theory-9.30.25.png
END:VEVENT
END:VCALENDAR