BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//CMSA - ECPv6.15.18//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:CMSA
X-ORIGINAL-URL:https://cmsa.fas.harvard.edu
X-WR-CALDESC:Events for CMSA
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20230312T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20231105T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20240310T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20241103T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20250309T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20251102T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240903T090000
DTEND;TZID=America/New_York:20241101T170000
DTSTAMP:20260417T053511Z
CREATED:20240105T033600Z
LAST-MODIFIED:20250305T175957Z
UID:10001112-1725354000-1730480400@cmsa.fas.harvard.edu
SUMMARY:Mathematics and Machine Learning Program
DESCRIPTION:Mathematics and Machine Learning Program \nDates: September 3 – November 1\, 2024 \nLocation: Harvard CMSA\, 20 Garden Street\, Cambridge\, MA 02138 \nMachine learning and AI are increasingly important tools in all fields of research. Recent milestones in machine learning for mathematics include data-driven discovery of theorems in knot theory and representation theory\, the discovery and proof of new singular solutions of the Euler equations\, new counterexamples and lower bounds in graph theory\, and more. Rigorous numerical methods and interactive theorem proving are playing an important part in obtaining these results. Conversely\, much of the spectacular progress in AI has a surprising simplicity at its core. Surely there are remarkable mathematical structures behind this\, yet to be elucidated. \nThe program will begin and end with two week-long workshops\, and will feature focus weeks on number theory\, knot theory\, graph theory\, rigorous numerics in PDE\, and interactive theorem proving\, as well as a course on geometric aspects of deep learning.\n\n  \nSeptember 3–5\, 2024: Opening Workshop: AI for Mathematicians\, with Léon Bottou\, François Charton\, David McAllester\, Adam Wagner\, and Geordie Williamson. A series of six lectures covering logic and theorem proving\, AI methods\, theory of machine learning\, two lectures on case studies in math-AI\, and a lecture and discussion on open problems and the ethics of AI in science.\nOpening Workshop YouTube Playlist \n\nSeptember 6–7\, 2024: Big Data Conference \n  \nSeptember 9–13\, 2024: Applying Machine Learning to Math\, with François Charton and Geordie Williamson\nPublic Lecture September 12\, 2024: Geordie Williamson\, University of Sydney: Can AI help with hard mathematics? (YouTube link)\nThe focus of this week will be on practical examples and techniques for the mathematics researcher keen to explore or deepen their use of AI techniques. 
We will have talks showcasing easily stated problems\, on which machine learning techniques can be employed profitably. These provide excellent toy examples for generating intuition. We will also have expert talks on some of the technical subtleties which arise. There are several instances where the accepted heuristics emerging from the study of large language models (LLMs) and image recognition don’t appear to apply to mathematics problems\, and we will try to highlight these subtleties.\nApplying Machine Learning to Math YouTube Playlist \n  \nSeptember 16–20\, 2024: Number theory\, with Drew Sutherland\nThe focus of this week will be on the use of ML as a tool for finding and understanding statistical patterns in number-theoretic datasets\, using the recently discovered (and still largely unexplained) “murmurations” in the distribution of Frobenius traces in families of elliptic curves and other arithmetic L-functions as a motivating example.\nNumber Theory YouTube Playlist \n  \nSeptember 23–27\, 2024: Knot theory\, with Sergei Gukov\nKnot theory is a great source of labeled data that can be synthetically generated. Moreover\, many outstanding problems in knot theory and low-dimensional topology can be formulated as decision and classification tasks\, e.g. “Is the knot 123_45 slice?” or “Can two given Kirby diagrams be related by a sequence of Kirby moves?” During this focus week we will explore various ways in which AI can be applied to problems in knot theory and how\, based on these applications\, mathematical reasoning can advance the development of AI algorithms. Another goal will be to develop formal knot theory libraries (e.g. 
contributions to mathlib) and to apply AI models to formal proof systems\, in particular in the context of knot theory.\nKnot Theory YouTube Playlist \n  \nSeptember 30: Teaching and Machine Learning Panel Discussion\, 3:30–5:30 pm ET \n  \nSeptember 30–October 4\, 2024: Graph theory and combinatorics\, with Adam Wagner\nThis week\, we will consider how machine learning can help us solve problems in combinatorics and graph theory\, broadly interpreted\, in practice. The advantage of these fields is that they deal with finite objects that are simple to set up using computers\, and programs that work for one problem can often be adapted to work for several other related problems as well. Many times\, the best constructions for a problem are easy to interpret\, making it simpler to judge how well a particular algorithm is performing. On the other hand\, there are lots of open conjectures that are simple to state\, for which the best-known constructions are counterintuitive\, making it perhaps more likely that machine learning methods can spot patterns that are difficult to understand otherwise.\nGraph Theory and Combinatorics YouTube Playlist \n  \nOctober 7–11\, 2024: More number theory\, with Drew Sutherland\nThe focus of this week will be on the use of AI as a tool to search for and/or construct interesting or extremal examples in number theory and arithmetic geometry\, using LLM-based genetic algorithms\, generative adversarial networks\, game-theoretic methods\, and heuristic tree pruning as alternatives to conventional local search strategies.\nMore Number Theory YouTube Playlist \n  \nOctober 14–18\, 2024: Interactive theorem proving\nThis week we will discuss the use of interactive theorem proving systems such as Lean\, Coq\, and Isabelle in mathematical research\, and AI systems which prove theorems and translate between informal and formal mathematics.\nInteractive Theorem Proving YouTube Playlist \n  \nOctober 21–25\, 2024: Numerical Partial Differential 
Equations (PDE)\, with Tristan Buckmaster and Javier Gomez-Serrano\nThe focus of this week will be on constructing solutions to partial differential equations and\, more broadly\, dynamical systems (finite and infinite dimensional). We will discuss several toy problems and comment on issues such as sampling strategies\, optimization algorithms\, ill-posedness\, and convergence. We will also outline strategies for further developing machine-learning findings and turning them into mathematical theorems via computer-assisted approaches.\nNumerical PDEs YouTube Playlist \n  \nOctober 28–Nov. 1\, 2024: Closing Workshop: The closing workshop will provide a forum for discussing the most current research in these areas\, including work in progress and recent results from program participants.\nMath and Machine Learning Closing Workshop YouTube Playlist \n  \nSeptember 3–Nov. 1: Graduate topics in deep learning theory (Boston College)\, taught by Eli Grigsby\, held at the CMSA Tuesdays and Thursdays\, 2:30–3:45 pm Eastern Time. Course website (link).\nGraduate Topics in Deep Learning YouTube Playlist \nCourse description: This is a course on geometric aspects of deep learning theory. Broadly speaking\, we’ll investigate the question: How might human-interpretable concepts be expressed in the geometry of their data encodings\, and how does this geometry interact with the computational units and higher-level algebraic structures in various parameterized function classes\, especially neural network classes? During the Sep. 3–Nov. 1 portion\, the course will be presented as part of the Math and Machine Learning program at the CMSA in Cambridge\, and we will focus on the current state of research on mechanistic interpretability of transformers\, the architecture underlying large language models like ChatGPT. \n\n\n\n\nPrerequisites: This course is targeted at graduate students and advanced undergraduates in mathematics and theoretical computer science. 
No prior background in machine learning or learning theory will be assumed\, but I will assume a degree of mathematical maturity (at the level of\, say\, the standard undergraduate math curriculum plus a first-year graduate geometry/topology sequence).\n\n\n\n\n\nProgram Organizers \n\nFrançois Charton (Meta AI)\nMichael R. Douglas (Harvard CMSA)\nMichael Freedman (Harvard CMSA)\nFabian Ruehle (Northeastern)\nGeordie Williamson (Univ. of Sydney)\n\n\nProgram Schedule  \nMonday\n10:30–noon\nOpen Discussion\nRoom G10 \n12:00–1:30 pm\nGroup lunch\nCMSA Common Room \nTuesday\n2:30–3:45 pm\nTopics in deep learning theory\nRoom G10 \n4:00–5:00 pm\nOpen Discussion/Tea\nCMSA Common Room \nWednesday\n10:30 am–12:00 pm\nOpen Discussion\nRoom G10 \n2:00–3:00 pm\nNew Technologies in Mathematics Seminar\nRoom G10 \nThursday\n2:30–3:45 pm\nTopics in deep learning theory\nRoom G10 \nFriday\n10:30 am–12:00 pm\nOpen Discussion\nRoom G10 \n\nHarvard CMSA thanks Mistral AI for a generous donation of computing credit.
URL:https://cmsa.fas.harvard.edu/event/mml2024/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Event,Programs
ATTACH;FMTTYPE=image/jpeg:https://cmsa.fas.harvard.edu/media/Machine-Learning-Program-poster-1.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241014T103000
DTEND;TZID=America/New_York:20241014T120000
DTSTAMP:20260417T053511Z
CREATED:20240911T195709Z
LAST-MODIFIED:20240911T195709Z
UID:10003481-1728901800-1728907200@cmsa.fas.harvard.edu
SUMMARY:Math and Machine Learning Program Discussion
DESCRIPTION:Math and Machine Learning Program Discussion \n 
URL:https://cmsa.fas.harvard.edu/event/mml_meeting_101424/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:MML Meeting
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241015T110000
DTEND;TZID=America/New_York:20241015T120000
DTSTAMP:20260417T053511Z
CREATED:20240903T183238Z
LAST-MODIFIED:20241010T180340Z
UID:10003424-1728990000-1728993600@cmsa.fas.harvard.edu
SUMMARY:Gravitational collapse to extremal Reissner-Nordström and the third law of black hole thermodynamics
DESCRIPTION:General Relativity Seminar \nSpeaker: Christoph Kehle\, MIT \nTitle: Gravitational collapse to extremal Reissner-Nordström and the third law of black hole thermodynamics \nAbstract: In this talk\, I will present a proof that extremal Reissner-Nordström black holes can form in finite time in gravitational collapse of charged matter. In particular\, this construction provides a definitive disproof of the “third law” of black hole thermodynamics. I will also discuss recent works showing that extremal black holes take on a central role in gravitational collapse\, giving rise to a new conjectural picture of “extremal critical collapse.” This is joint work with Ryan Unger (Stanford).
URL:https://cmsa.fas.harvard.edu/event/general-relativity-seminar-101524/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:General Relativity Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-GR-Seminar-10.15.2024.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241015T143000
DTEND;TZID=America/New_York:20241015T154500
DTSTAMP:20260417T053511Z
CREATED:20240930T194515Z
LAST-MODIFIED:20240930T194515Z
UID:10003606-1729002600-1729007100@cmsa.fas.harvard.edu
SUMMARY:Topics in Deep Learning Theory
DESCRIPTION:Topics in Deep Learning Theory \nEli Grigsby
URL:https://cmsa.fas.harvard.edu/event/deeplearning_101524/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Topics in Deep Learning Theory
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241015T161500
DTEND;TZID=America/New_York:20241015T181500
DTSTAMP:20260417T053511Z
CREATED:20240917T162135Z
LAST-MODIFIED:20240927T182405Z
UID:10003514-1729008900-1729016100@cmsa.fas.harvard.edu
SUMMARY:Topological Modular Forms\, its equivariant refinements and relation with supersymmetric quantum field theories
DESCRIPTION:Geometry and Quantum Theory Seminar \nSpeaker: Mayuko Yamashita\, Kyoto University \nTitle: Topological Modular Forms\, its equivariant refinements and relation with supersymmetric quantum field theories \nAbstract: This talk is about the Segal-Stolz-Teichner program\, one of the deepest and most interesting topics relating homotopy theory and physics. Mathematically\, they propose a geometric model of TMF\, the spectrum (in homotopy theory) of Topological Modular Forms\, in terms of supersymmetric quantum field theories. Their proposal\, although far from a solid formulation or proof\, has been a guiding principle leading us to many new interesting ideas and discoveries in both mathematics and physics. In this talk\, I will give an overview of this topic\, as well as my current work using equivariant twisted TMF.
URL:https://cmsa.fas.harvard.edu/event/quantumgeo_101524/
LOCATION:Science Center Hall E\, 1 Oxford Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Geometry and Quantum Theory Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-Geometry-Quantum-Theory-10.15.2024.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241016T103000
DTEND;TZID=America/New_York:20241016T120000
DTSTAMP:20260417T053511Z
CREATED:20240911T205219Z
LAST-MODIFIED:20240911T205219Z
UID:10003494-1729074600-1729080000@cmsa.fas.harvard.edu
SUMMARY:Math and Machine Learning Program Discussion
DESCRIPTION:Math and Machine Learning Program Discussion \n 
URL:https://cmsa.fas.harvard.edu/event/mml_meeting_101624/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:MML Meeting
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241016T120000
DTEND;TZID=America/New_York:20241016T130000
DTSTAMP:20260417T053511Z
CREATED:20241015T133229Z
LAST-MODIFIED:20241015T133655Z
UID:10003530-1729080000-1729083600@cmsa.fas.harvard.edu
SUMMARY:CMSA Q&A Seminar: Nazim Bouatta
DESCRIPTION:CMSA Q&A Seminar \nSpeaker: Nazim Bouatta (HMS) \nTopic: What are AlphaFold2 and OpenFold?
URL:https://cmsa.fas.harvard.edu/event/cmsaqa_101624/
LOCATION:Common Room\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:CMSA Q&A Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-Q-A-Seminar-10.16.2024.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241016T140000
DTEND;TZID=America/New_York:20241016T150000
DTSTAMP:20260417T053511Z
CREATED:20241010T152711Z
LAST-MODIFIED:20241108T192805Z
UID:10003612-1729087200-1729090800@cmsa.fas.harvard.edu
SUMMARY:From Word Prediction to Complex Skills: Data Flywheels for Mathematical Reasoning
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: Anirudh Goyal (University of Montreal) \nTitle: From Word Prediction to Complex Skills: Data Flywheels for Mathematical Reasoning \nAbstract: This talk examines how large language models (LLMs) evolve from simple word prediction to complex skills\, with a focus on mathematical problem solving. A major driver of AI products today is the fact that new skills emerge in language models when their parameter set and training corpora are scaled up. This phenomenon is poorly understood\, and a mechanistic explanation via mathematical analysis of gradient-based training seems difficult. The first part of the talk focuses on analysing emergence using the famous (and empirical) Scaling Laws of LLMs. Then I talk about how LLMs can verbalize these skills by assigning labels to problems and clustering them into interpretable categories. This metacognitive ability allows us to leverage skill-based prompting\, significantly improving performance on mathematical reasoning. I then present a framework that combines LLMs with human oversight to generate challenging\, out-of-distribution math questions. This process led to the creation of the MATH^2 dataset\, which enhances both model and human performance\, driving further advances in mathematical reasoning capabilities. \n 
URL:https://cmsa.fas.harvard.edu/event/newtech_101624/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-NTM-Seminar-10.16.24.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241016T160000
DTEND;TZID=America/New_York:20241016T170000
DTSTAMP:20260417T053511Z
CREATED:20240927T150643Z
LAST-MODIFIED:20250328T150459Z
UID:10002915-1729094400-1729098000@cmsa.fas.harvard.edu
SUMMARY:Math Science Lectures in Honor of Raoul Bott: Andrew Neitzke
DESCRIPTION:Speaker: Andrew Neitzke\, Yale University \nLocation: Harvard University Science Center Hall D & via Zoom webinar \nDates: October 16 & 17\, 2024 \nTime: 4:00 pm \n  \n \nWednesday\, Oct. 16\, 2024 \nTitle: Abelianization in analysis of ODEs \nAbstract: I will describe the exact WKB method for asymptotic analysis of families of ODEs in one variable\, and its interpretation as a kind of abelianization procedure\, which replaces GL(N)-connections over a Riemann surface by GL(1)-connections over an N-fold branched cover. This abelianization procedure connects exact WKB to various subjects in geometry (cluster algebras\, moduli of Higgs bundles\, enumerative geometry). One application is a conjectural description of Hitchin’s hyperkähler metric on the moduli of Higgs bundles; I will review some recent progress on these conjectures. \n  \n \nThursday\, Oct. 17\, 2024 \nTitle: Abelianization in quantum topology \nAbstract: I will describe new applications of abelianization to various related subjects: perturbative Chern-Simons invariants\, skein algebras\, and conformal blocks. The aim is to explain how abelianization gives a unifying perspective on constructions familiar in each of these subjects (e.g. dilogarithmic formulas for Chern-Simons invariants\, vertex models for computing quantum invariants of links\, and iterated-fusion constructions of conformal blocks for the Virasoro algebra)\, and also suggests various extensions\, which are just beginning to be explored. \n  \n\nRaoul Bott (9/24/1923 – 12/20/2005) is known for the Bott periodicity theorem\, the Morse–Bott functions\, and the Borel–Bott–Weil theorem. 
URL:https://cmsa.fas.harvard.edu/event/mathscibott_1024/
LOCATION:Harvard Science Center\, 1 Oxford Street\, Cambridge\, MA\, 02138
CATEGORIES:Event,Math Science Lectures in Honor of Raoul Bott,Public Lecture,Special Lectures
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/Bott-Lecture_Neitzke_11x17.1.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241017T143000
DTEND;TZID=America/New_York:20241017T154500
DTSTAMP:20260417T053511Z
CREATED:20240930T195928Z
LAST-MODIFIED:20240930T195928Z
UID:10003607-1729175400-1729179900@cmsa.fas.harvard.edu
SUMMARY:Topics in Deep Learning Theory
DESCRIPTION:Topics in Deep Learning Theory \nEli Grigsby
URL:https://cmsa.fas.harvard.edu/event/deeplearning_101724/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Topics in Deep Learning Theory
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241017T160000
DTEND;TZID=America/New_York:20241017T170000
DTSTAMP:20260417T053511Z
CREATED:20240927T150813Z
LAST-MODIFIED:20250409T192551Z
UID:10002916-1729180800-1729184400@cmsa.fas.harvard.edu
SUMMARY:Math Science Lectures in Honor of Raoul Bott: Andrew Neitzke
DESCRIPTION:Speaker: Andrew Neitzke\, Yale University \nLocation: Harvard University Science Center Hall D & via Zoom webinar \nDates: October 16 & 17\, 2024 \nTime: 4:00 pm \n  \n \nWednesday\, Oct. 16\, 2024 \nTitle: Abelianization in analysis of ODEs \nAbstract: I will describe the exact WKB method for asymptotic analysis of families of ODEs in one variable\, and its interpretation as a kind of abelianization procedure\, which replaces GL(N)-connections over a Riemann surface by GL(1)-connections over an N-fold branched cover. This abelianization procedure connects exact WKB to various subjects in geometry (cluster algebras\, moduli of Higgs bundles\, enumerative geometry). One application is a conjectural description of Hitchin’s hyperkähler metric on the moduli of Higgs bundles; I will review some recent progress on these conjectures. \n  \n \nThursday\, Oct. 17\, 2024 \nTitle: Abelianization in quantum topology \nAbstract: I will describe new applications of abelianization to various related subjects: perturbative Chern-Simons invariants\, skein algebras\, and conformal blocks. The aim is to explain how abelianization gives a unifying perspective on constructions familiar in each of these subjects (e.g. dilogarithmic formulas for Chern-Simons invariants\, vertex models for computing quantum invariants of links\, and iterated-fusion constructions of conformal blocks for the Virasoro algebra)\, and also suggests various extensions\, which are just beginning to be explored. \n\n  \nRaoul Bott (9/24/1923 – 12/20/2005) is known for the Bott periodicity theorem\, the Morse–Bott functions\, and the Borel–Bott–Weil theorem. 
URL:https://cmsa.fas.harvard.edu/event/mathscibott_1024-2/
LOCATION:Harvard Science Center\, 1 Oxford Street\, Cambridge\, MA\, 02138
CATEGORIES:Event,Math Science Lectures in Honor of Raoul Bott,Public Lecture,Special Lectures
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/Bott-Lecture_Neitzke_11x17.1.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241018T090000
DTEND;TZID=America/New_York:20241018T100000
DTSTAMP:20260417T053511Z
CREATED:20240907T193958Z
LAST-MODIFIED:20241015T143755Z
UID:10003468-1729242000-1729245600@cmsa.fas.harvard.edu
SUMMARY:Bosonic and fermionic 1-form symmetries and anomaly matching
DESCRIPTION:Quantum Field Theory and Physical Mathematics Seminar \n*via Zoom only* \nSpeaker: Rajath Radhakrishnan (ICTP\, Trieste) \nTitle: Bosonic and fermionic 1-form symmetries and anomaly matching \nAbstract: In this talk\, I will consider bosonic and fermionic (non-invertible) 1-form symmetries in 2+1d QFTs. These are 1-form symmetries implemented by topological line operators with real spins. I will present a classification of topological quantum field theories in which all line operators have real topological spins\, and use this framework to classify the anomalies associated with these 1-form symmetries. Additionally\, I will discuss the anomaly matching condition for these symmetries under an RG flow. I will illustrate this condition in concrete examples.
URL:https://cmsa.fas.harvard.edu/event/qm_101824/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Quantum Field Theory and Physical Mathematics
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-QFT-10.18.2024.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241018T103000
DTEND;TZID=America/New_York:20241018T120000
DTSTAMP:20260417T053511Z
CREATED:20240912T145729Z
LAST-MODIFIED:20240912T145729Z
UID:10003503-1729247400-1729252800@cmsa.fas.harvard.edu
SUMMARY:Math and Machine Learning Program Discussion
DESCRIPTION:Math and Machine Learning Program Discussion \n 
URL:https://cmsa.fas.harvard.edu/event/mml_meeting_101824/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:MML Meeting
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241018T120000
DTEND;TZID=America/New_York:20241018T130000
DTSTAMP:20260417T053511Z
CREATED:20240919T144412Z
LAST-MODIFIED:20241015T180358Z
UID:10003521-1729252800-1729256400@cmsa.fas.harvard.edu
SUMMARY:Positive mass and rigidity theorems in Riemannian geometry
DESCRIPTION:Member Seminar \nSpeaker: Puskar Mondal \nTitle: Positive mass and rigidity theorems in Riemannian geometry \nAbstract: The positive mass theorem\, proved by Schoen-Yau\, Witten\, and Taubes-Parker\, is one of the most important results in scalar curvature geometry in asymptotically flat settings. Since then\, several versions have been proven and generalized to other geometries\, such as asymptotically hyperbolic manifolds. The analogous theorem for strictly positive curvature geometries is absent. There have been counterexamples\, but a precise quantification does not exist. I prove a scalar curvature rigidity theorem for spheres. In particular\, I prove that $n+1~(n\geq 2)$ dimensional spherical caps with constant positive mean curvature totally umbilic boundaries are rigid under smooth perturbations\, and such rigidity results fail for the hemisphere. The assertion of this result is based on the notion of a real Killing connection and the solution of the boundary value problem associated with its Dirac operator. Additionally\, an improved eigenvalue estimate for the Dirac operator on hypersurfaces in positively curved manifolds is obtained.
URL:https://cmsa.fas.harvard.edu/event/member-seminar-101824/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Member Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-Member-Seminar-10.18.24.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241021T093000
DTEND;TZID=America/New_York:20241021T103000
DTSTAMP:20260417T053511Z
CREATED:20240907T155707Z
LAST-MODIFIED:20241018T171738Z
UID:10003446-1729503000-1729506600@cmsa.fas.harvard.edu
SUMMARY:Foundation Seminar: Singularity Theorems\, Part I
DESCRIPTION:Foundation Seminar (Joint Seminar with BHI) \nLocation: BHI \nTitle: Singularity Theorems\, Part I \nJournal Club Discussion
URL:https://cmsa.fas.harvard.edu/event/foundation-seminar_102124/
LOCATION:Black Hole Initiative\, 20 Garden Street\, Cambridge MA\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Foundation Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/10.28.24_Singularity-Theorems-Part-II-4.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241021T103000
DTEND;TZID=America/New_York:20241021T120000
DTSTAMP:20260417T053511Z
CREATED:20240911T195747Z
LAST-MODIFIED:20240911T195747Z
UID:10003482-1729506600-1729512000@cmsa.fas.harvard.edu
SUMMARY:Math and Machine Learning Program Discussion
DESCRIPTION:Math and Machine Learning Program Discussion \n 
URL:https://cmsa.fas.harvard.edu/event/mml_meeting_102124/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:MML Meeting
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241021T163000
DTEND;TZID=America/New_York:20241021T173000
DTSTAMP:20260417T053511Z
CREATED:20240903T195022Z
LAST-MODIFIED:20241016T144838Z
UID:10003435-1729528200-1729531800@cmsa.fas.harvard.edu
SUMMARY:Higher Vapnik–Chervonenkis theory
DESCRIPTION:Colloquium \nSpeaker: Artem Chernikov\, University of Maryland \nTitle: Higher Vapnik–Chervonenkis theory \nAbstract: Finite VC-dimension\, a combinatorial property of families of sets\, was discovered simultaneously by Vapnik and Chervonenkis in probabilistic learning theory\, and by Shelah in model theory (where it is called NIP). It plays an important role in several areas including machine learning\, combinatorics\, mathematical logic\, functional analysis\, and topological dynamics. We develop aspects of higher-order VC-theory\, in particular establishing a generalization of the epsilon-net theorem for families of sets (and functions) on n-fold product spaces with bounded VC_n-dimension (i.e. there is a bound on the sizes of n-dimensional boxes that can be shattered). We obtain some applications in combinatorics and in model theory\, including a strong version of Szemerédi’s regularity lemma for hypergraphs omitting a fixed finite n-partite n-hypergraph. Joint work with Henry Towsner.
URL:https://cmsa.fas.harvard.edu/event/colloquium-102124/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Colloquium
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-Colloquium-10.21.2024.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241022T143000
DTEND;TZID=America/New_York:20241022T154500
DTSTAMP:20260417T053511Z
CREATED:20240930T200000Z
LAST-MODIFIED:20240930T200000Z
UID:10003608-1729607400-1729611900@cmsa.fas.harvard.edu
SUMMARY:Topics in Deep Learning Theory
DESCRIPTION:Topics in Deep Learning Theory \nEli Grigsby
URL:https://cmsa.fas.harvard.edu/event/deeplearning_102224/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Topics in Deep Learning Theory
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241022T160000
DTEND;TZID=America/New_York:20241022T170000
DTSTAMP:20260417T053511Z
CREATED:20240911T201850Z
LAST-MODIFIED:20240911T201850Z
UID:10003488-1729612800-1729616400@cmsa.fas.harvard.edu
SUMMARY:Math and Machine Learning Program Open Discussion/Tea
DESCRIPTION:Open Discussion/Tea
URL:https://cmsa.fas.harvard.edu/event/mml_tea_102224/
LOCATION:Common Room\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:MML Meeting
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241022T161500
DTEND;TZID=America/New_York:20241022T181500
DTSTAMP:20260417T053511Z
CREATED:20240917T160638Z
LAST-MODIFIED:20241007T195901Z
UID:10003510-1729613700-1729620900@cmsa.fas.harvard.edu
SUMMARY:Fusion 2-Categories and their Classification
DESCRIPTION:Geometry and Quantum Theory Seminar \nSpeaker: Thibault Décoppet\, Harvard University \nTitle: Fusion 2-Categories and their Classification \nAbstract: Fusion 2-categories\, which categorify the classical notion of a fusion (1-)category\, were recently introduced. These objects have found many applications in physics\, most notably to the classification of topological orders\, but also to the description of non-invertible symmetries in 2+1 dimensions. The first part of this talk will be devoted to reviewing the definition of a fusion 2-category and giving many examples. In the second half\, I will present a remarkable result concerning the Morita theory of fusion 2-categories and explain how it can be used to give a homotopy coherent classification of fusion 2-categories. \n 
URL:https://cmsa.fas.harvard.edu/event/quantumgeo_102224/
LOCATION:Science Center Hall E\, 1 Oxford Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Geometry and Quantum Theory Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-Geometry-Quantum-Theory-10.22.2024.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241023T103000
DTEND;TZID=America/New_York:20241023T120000
DTSTAMP:20260417T053511Z
CREATED:20240911T205240Z
LAST-MODIFIED:20240911T205240Z
UID:10003495-1729679400-1729684800@cmsa.fas.harvard.edu
SUMMARY:Math and Machine Learning Program Discussion
DESCRIPTION:Math and Machine Learning Program Discussion \n 
URL:https://cmsa.fas.harvard.edu/event/mml_meeting_102324/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:MML Meeting
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241023T120000
DTEND;TZID=America/New_York:20241023T130000
DTSTAMP:20260417T053511
CREATED:20241016T180943Z
LAST-MODIFIED:20241016T182816Z
UID:10003531-1729684800-1729688400@cmsa.fas.harvard.edu
SUMMARY:CMSA Q&A Seminar: Dan Freed
DESCRIPTION:CMSA Q&A Seminar \nSpeaker: Dan Freed\, Harvard Mathematics & CMSA \nTopic: What are topological phases of matter?
URL:https://cmsa.fas.harvard.edu/event/cmsaqa_102324/
LOCATION:Common Room\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:CMSA Q&A Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-Q-A-Seminar-10.23.2024.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241023T140000
DTEND;TZID=America/New_York:20241023T150000
DTSTAMP:20260417T053511
CREATED:20241021T140701Z
LAST-MODIFIED:20241108T192710Z
UID:10003616-1729692000-1729695600@cmsa.fas.harvard.edu
SUMMARY:How Far Can Transformers Reason? The Globality Barrier and Inductive Scratchpad
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: Aryo Lotfi (EPFL) \nTitle: How Far Can Transformers Reason? The Globality Barrier and Inductive Scratchpad \nAbstract: Can Transformers predict new syllogisms by composing established ones? More generally\, what type of targets can be learned by such models from scratch? Recent works show that Transformers can be Turing-complete in terms of expressivity\, but this does not address the learnability objective. This paper puts forward the notion of ‘globality degree’ of a target distribution to capture when weak learning is efficiently achievable by regular Transformers\, where the latter measures the least number of tokens required in addition to the tokens histogram to correlate nontrivially with the target. As shown experimentally and theoretically under additional assumptions\, distributions with high globality cannot be learned efficiently. In particular\, syllogisms cannot be composed on long chains. Furthermore\, we show that (i) an agnostic scratchpad cannot help to break the globality barrier\, (ii) an educated scratchpad can help if it breaks the globality at each step\, however not all such scratchpads can generalize to out-of-distribution (OOD) samples\, (iii) a notion of ‘inductive scratchpad’\, that composes the prior information more efficiently\, can both break the globality barrier and improve the OOD generalization. In particular\, some inductive scratchpads can achieve length generalizations of up to 6x for some arithmetic tasks depending on the input formatting.
URL:https://cmsa.fas.harvard.edu/event/newtech_102324/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=application/pdf:https://cmsa.fas.harvard.edu/media/CMSA-NTM-Seminar-10.23.24.docx-1-1.pdf
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241024T100000
DTEND;TZID=America/New_York:20241024T110000
DTSTAMP:20260417T053512
CREATED:20241018T143428Z
LAST-MODIFIED:20241018T144254Z
UID:10003595-1729764000-1729767600@cmsa.fas.harvard.edu
SUMMARY:Heterotic Little String Theories and Inequivalent Genus-One Fibrations
DESCRIPTION:Mathematical Physics and Algebraic Geometry Seminar \nSpeaker: Hamza Ahmed\, Northeastern University \nTitle: Heterotic Little String Theories and Inequivalent Genus-One Fibrations \nAbstract: Little String Theories (LSTs) are 6D supersymmetric quantum field theories (SQFTs) with an additional physical relation called T-duality. This enables us to arrange them into equivalence classes\, where each equivalence class has 6D LSTs that lead to the same 5D effective theory when compactified on a circle. The problem of finding T-dual LSTs can be mapped to the problem of finding inequivalent genus-one fibrations of the same non-compact Calabi-Yau (CY) threefold. For T-dual theories\, certain field theory data is expected to match\, which then implies certain invariants of inequivalent fibrations. Focusing on theories with 8 supercharges (Heterotic LSTs)\, we use this geometry-field theory equivalence to study the T-duality landscape\, particularly in the case where the genus-one fiber does not have a section\, leading to what are called twisted T-dual theories. Based on the excellent agreement we find between the geometry and field theory arguments\, we conjecture the existence of a new class of twisted T-duals for which no geometric construction is known. \n 
URL:https://cmsa.fas.harvard.edu/event/mathphys_102424/
LOCATION:Virtual
CATEGORIES:Mathematical Physics and Algebraic Geometry
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-Mathematical-Physics-and-Algebraic-Geometry-10.24.2024.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241024T143000
DTEND;TZID=America/New_York:20241024T154500
DTSTAMP:20260417T053512
CREATED:20240930T200138Z
LAST-MODIFIED:20240930T200138Z
UID:10003609-1729780200-1729784700@cmsa.fas.harvard.edu
SUMMARY:Topics in Deep Learning Theory
DESCRIPTION:Topics in Deep Learning Theory \nEli Grigsby
URL:https://cmsa.fas.harvard.edu/event/deeplearning_102424/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Topics in Deep Learning Theory
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241025T090000
DTEND;TZID=America/New_York:20241025T103000
DTSTAMP:20260417T053512
CREATED:20240907T194046Z
LAST-MODIFIED:20241018T221702Z
UID:10003469-1729846800-1729852200@cmsa.fas.harvard.edu
SUMMARY:The spin-statistics theorem for TFTs
DESCRIPTION:Quantum Field Theory and Physical Mathematics Seminar \nSpeaker: Luuk Stehouwer\, Dalhousie University \nTitle: The spin-statistics theorem for TFTs \nAbstract: In quantum field theory (QFT) the spin-statistics theorem says that in a unitary QFT\, a particle has half-integer spin if and only if it is a fermion. I show how to phrase this statement in the language of functorial field theories. More precisely\, I explain when a functorial field theory “has fermions” and “has spinors” and when they are “related”. I will then restrict to topological field theories (TFTs) and define unitary TFTs. There are counterexamples to the spin-statistics theorem for non-unitary TFTs. I will prove that every unitary TFT satisfies the spin-statistics theorem. \n 
URL:https://cmsa.fas.harvard.edu/event/qm_102524/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Quantum Field Theory and Physical Mathematics
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-QFT-and-Physical-Mathematics-10.25.2024.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241025T103000
DTEND;TZID=America/New_York:20241025T120000
DTSTAMP:20260417T053512
CREATED:20240912T144420Z
LAST-MODIFIED:20240912T145420Z
UID:10003501-1729852200-1729857600@cmsa.fas.harvard.edu
SUMMARY:Math and Machine Learning Program Discussion
DESCRIPTION:Math and Machine Learning Program Discussion \n 
URL:https://cmsa.fas.harvard.edu/event/mml_meeting_102524/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:MML Meeting
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241025T120000
DTEND;TZID=America/New_York:20241025T130000
DTSTAMP:20260417T053512
CREATED:20240919T144515Z
LAST-MODIFIED:20241022T155009Z
UID:10003522-1729857600-1729861200@cmsa.fas.harvard.edu
SUMMARY:Formality Theorem and Webs
DESCRIPTION:Member Seminar \nSpeaker: Ahsan Khan \nTitle: Formality Theorem and Webs \nAbstract: The “formality theorem” of Kontsevich was a key result that implies that every Poisson manifold admits a deformation quantization. I will review the ideas behind the formality theorem and discuss a potentially novel viewpoint on it involving webs and twisted masses.
URL:https://cmsa.fas.harvard.edu/event/member-seminar-102524/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Member Seminar
ATTACH;FMTTYPE=application/pdf:https://cmsa.fas.harvard.edu/media/CMSA-Member-Seminar-10.25.24.docx.pdf
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241025T143000
DTEND;TZID=America/New_York:20241025T173000
DTSTAMP:20260417T053512
CREATED:20240907T191539Z
LAST-MODIFIED:20241010T152044Z
UID:10003466-1729866600-1729877400@cmsa.fas.harvard.edu
SUMMARY:Freedman CMSA Seminar
DESCRIPTION:Freedman CMSA Seminar \n*Note: via Zoom only* \n2:00-3:30 pm ET \nSpeaker: Matt Hastings\, Microsoft Quantum Program \nTitle: Invertible Phases of Matter and Quantum Cellular Automata: Dimensions One to Three \nAbstract: A Quantum Cellular Automaton (QCA) is a *-automorphism of the algebra of local operators. While local quantum circuits provide one example of QCA\, we are most interested in nontrivial QCA: those that cannot be written as conjugation by a local quantum circuit. For systems in one and two spatial dimensions\, all nontrivial QCA are shifts (i.e.\, translations by some amount)\, up to conjugation by a quantum circuit\, but in three and higher dimensions\, other examples are known. I’ll explain the relation between QCA and a certain “boundary algebra” of operators in one lower spatial dimension\, and also the relation to invertible phases of matter on the boundary\, and use this to explain and motivate some of these results in dimensions one through three. \n  \n3:30-4:00 pm ET \nBreak/Discussion \n  \n4:00-5:30 pm ET \nSpeaker: Lukasz Fidkowski\, U Washington\, Physics \nTitle: Invertible Phases of Matter and Quantum Cellular Automata: Higher dimensions \nAbstract: We discuss the explicit construction of a non-trivial QCA in 3 dimensions\, one which takes the form of multiplication by a discrete Chern-Simons functional in an appropriate basis for the Hilbert space. We relate the non-triviality of the QCA to the fact that the Chern-Simons action is not the integral of a gauge invariant local quantity. One property of this QCA is that it creates a specific non-trivial time reversal symmetry protected topological (SPT) phase when acting on a non-trivial tensor product state. Motivated by this\, we construct a general class of QCA in arbitrary dimensions based on time reversal protected SPTs\, and conjecture a general correspondence between unoriented cobordism (which classifies such SPTs) and QCA. \n  \n 
URL:https://cmsa.fas.harvard.edu/event/freedman_102524/
LOCATION:Virtual
CATEGORIES:Freedman Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-Freedman-Seminar-10.25.2024.docx-1.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241028T090000
DTEND;TZID=America/New_York:20241030T170000
DTSTAMP:20260417T053512
CREATED:20240105T032648Z
LAST-MODIFIED:20241106T191859Z
UID:10001111-1730106000-1730307600@cmsa.fas.harvard.edu
SUMMARY:Mathematics and Machine Learning Closing Workshop
DESCRIPTION:Mathematics and Machine Learning Closing Workshop \nDates: October 28–30\, 2024 \nLocation: Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA \nThe closing workshop will provide a forum for discussing the most current research in these areas\, including work in progress and recent results from program participants. We will devote one day to frontier topics in interactive theorem proving\, such as mathematical library development and AI for mathematical search and theorem proving. \n  \nYoutube Playlist \n \nOrganizers \n\nFrancois Charton (Meta AI)\nMichael R. Douglas (Harvard CMSA)\nMichael Freedman (Harvard CMSA)\nFabian Ruehle (Northeastern)\nGeordie Williamson (Univ. of Sydney)\n\nSpeakers \n\nAnkit Anand\, Google Deepmind Montreal\nJeremy Avigad\, Carnegie Mellon University\nAngelica Babei\nMatej Balog\, Deepmind\nGergely Bérczi\, Aarhus University\nTristan Buckmaster\, New York University\nGiorgi Butbaia\, University of New Hampshire\nEdgar Costa\, MIT\nAlex Davies\, DeepMind\nBin Dong\, Beijing International Center for Mathematical Research\nKit Fraser-Taliente\, University of Oxford\nJavier Gomez-Serrano\, Brown University\nJim Halverson\, Northeastern University\nThomas Harvey\, MIT\nAmaury Hayat\, Ecole des Ponts Paristech\nYang-Hui He\, City University of London\nJürgen Jost\, Max Planck Institute for Mathematics in the Sciences\nPetros Koumoutsakos\, Harvard University\nKyu-Hwan Lee\, University of Connecticut\nDavid Lowry-Duda\, ICERM\nStéphane Mallat\, Flatiron/College de France\nAbbas Mehrabian\, Google Deepmind Montreal\nCengiz Pehlevan\, Harvard University\nFabian Ruehle\, Northeastern University\nEric Vanden-Eijnden\, Courant/NYU\nAdam Wagner\, Worcester Polytechnic Institute\nMelanie Matchett Wood\, Harvard University\n\n  \nSchedule (download PDF) \nMonday Oct. 
28\, 2024 \n9:00–9:30 am Morning refreshments \n9:30–9:45 am Introductions \n9:45–10:45 am Jürgen Jost\, Max Planck Institute for Mathematics in the Sciences \nTitle: Data visualization with category theory and geometry \nAbstract: While data often come in a high-dimensional feature space\, they typically exhibit intrinsic constraints and regularities\, and they can therefore often be represented on a lower-dimensional\, but possibly highly curved Riemannian manifold. Still\, for visualization purposes\, the dimension needs to be lowered to 2 or 3. We present the mathematical foundations for such schemes\, in particular UMAP\, and describe an improved such method. \n10:45–11:00 am Break \n11:00 am–12:00 pm Ankit Anand\, Google Deepmind Montreal\, Abbas Mehrabian\, Google Deepmind Montreal \nTitle: From Theorem Proving to Disproving: Modern machine learning versus classical heuristic search in automated theorem proving and extremal graph theory \nAbstract: Machine learning is widely believed to outperform classical methods\, but this is not always the case. Firstly\, we describe how we adapted the idea of hindsight experience replay from reinforcement learning to the automated theorem proving domain\, so as to use the intermediate data generated during unsuccessful proofs. We show that provers trained in this way can outperform previous machine learning approaches and compete with the state-of-the-art heuristic-based theorem prover E in its best configuration\, on the popular benchmarks MPTP2078\, M2k and Mizar40. The proofs generated by our algorithm are also almost always significantly shorter than E’s proofs. Based on this paper\, which was presented at ICML 2022: https://proceedings.mlr.press/v162/aygun22a.html. Secondly\, we study a central extremal graph theory problem inspired by a 1975 conjecture of Erdős\, which aims to find graphs with a given size (number of nodes) that maximize the number of edges without having 3- or 4-cycles. 
We formulate this problem as a sequential decision-making problem and compare AlphaZero\, a neural network-guided tree search\, with tabu search\, a heuristic local search method. Using either method\, by introducing a curriculum—jump-starting the search for larger graphs using good graphs found at smaller sizes—we improve the state-of-the-art lower bounds for several sizes. Joint work with Tudor Berariu\, Joonkyung Lee\, Anurag Murty Naredla\, Adam Zsolt Wagner\, and other colleagues at Google DeepMind. Based on this paper\, which was presented at IJCAI 2024: https://arxiv.org/abs/2311.03583. \n12:00–1:30 pm Lunch Break \n1:30–2:30 pm Fabian Ruehle\, Northeastern University\, Giorgi Butbaia\, University of New Hampshire \nTitle: Rigorous results from ML using RL \nAbstract: We explain how to use Reinforcement Learning in Mathematics to obtain provably correct results. After a brief introduction to Reinforcement Learning\, I will illustrate the idea using an example from Number Theory\, where we solve a Diophantine Equation related to String Theory\, and two from Knot Theory. The first knot theory problem is to identify unknots\, while the second is concerned with identifying so-called ribbon knots. The latter play an important role in the search for counterexamples to the smooth Poincaré conjecture. \n2:30–2:45 pm Break \n2:45–3:45 pm Cengiz Pehlevan\, Harvard University \nTitle: Solvable Models of Scaling and Emergence in Deep Learning \n3:45–4:00 pm Break \n4:00–4:30 pm Matej Balog\, Deepmind (via Zoom) \nTitle: FunSearch: Mathematical discoveries from program search with large language models \nAbstract: Large language models (LLMs) have demonstrated tremendous capabilities in solving complex tasks\, from quantitative reasoning to understanding natural language. However\, LLMs sometimes suffer from confabulations (or hallucinations)\, which can result in them making plausible but incorrect statements. 
This hinders the use of current large models in scientific discovery. We introduce FunSearch (short for searching in the function space)\, an evolutionary procedure based on pairing a pretrained LLM with a systematic evaluator. We demonstrate the effectiveness of this approach to surpass the best-known results in important problems\, pushing the boundary of existing LLM-based approaches. Applying FunSearch to a central problem in extremal combinatorics—the cap set problem—we discover new constructions of large cap sets going beyond the best-known ones\, both in finite dimensional and asymptotic cases. This shows that it is possible to make discoveries for established open problems using LLMs. We showcase the generality of FunSearch by applying it to an algorithmic problem\, online bin packing\, finding new heuristics that improve on widely used baselines. In contrast to most computer search approaches\, FunSearch searches for programs that describe how to solve a problem\, rather than what the solution is. Beyond being an effective and scalable strategy\, discovered programs tend to be more interpretable than raw solutions\, enabling feedback loops between domain experts and FunSearch\, and the deployment of such programs in real-world applications. \n4:30–5:00 pm Edgar Costa\, MIT \nTitle: Machine learning L-functions \nAbstract: We report on multiple experiments related to L-functions data. L-functions are complex functions that encode significant information about number theory and algebraic geometry\, playing a crucial part in the Langlands program\, a foundational set of conjectures connecting number theory with other mathematical domains. We focused on two L-function datasets. The first includes about 250k rational L-functions of small arithmetic complexity with diverse origins. Multiple dimensionality reduction techniques were used to analyze invariants and behavioral trends\, focusing on how differing origins impact the results. 
The second dataset is composed of L-functions associated with Maass forms. Although these L-functions are non-rational\, they also share the low arithmetic complexity of the first dataset. The crux of our investigation here is determining whether this set manifests similar characteristics to the first one. Based on this exploration\, we propose a simple heuristic method to deduce their Fricke sign\, an unknown invariant for 40% of the data. This is joint work with: Joanna Biere\, Giorgi Butbaia\, Alyson Deines\, Kyu-Hwan Lee\, David Lowry-Duda\, Tom Oliver\, Tamara Veenstra\, and Yidi Qi. \n  \n  \nTuesday Oct. 29\, 2024  \n9:15–9:45 am Morning refreshments \n9:45–10:45 am Yang-Hui He\, London Institute for Mathematical Sciences (via Zoom) \nTitle: AI assisted mathematics \nAbstract: We argue how AI can assist mathematics in three ways: theorem-proving\, conjecture formulation\, and language processing. Inspired by initial experiments in geometry and string theory in 2017\, we summarize how this emerging field has grown over the past years\, and show how various machine-learning algorithms can help with pattern detection across disciplines ranging from algebraic geometry to representation theory\, to combinatorics\, and to number theory.  At the heart of the program is the question of how AI can help with theoretical discovery\, and the implications for the future of mathematics. \n10:45–11:00 am Break \n11:00–11:30 am Angelica Babei \nTitle: Learning Euler factors of elliptic curves with transformers \nAbstract: The L-function of an elliptic curve is at the core of the BSD conjecture\, and its Euler factors encode important arithmetic information about the curve. For example\, understanding these Euler factors using machine learning techniques has recently led to discovering the phenomenon of murmurations. In this talk\, we present some results on learning Euler factors based on (1) other nearby factors\, and (2) the Weierstrass equation of the curve.  
\n11:30–12:00 pm David Lowry-Duda\, ICERM \nTitle: Exploring patterns in number theory with deep learning: a case study with the Möbius and squarefree indicator functions \nAbstract: We report on experiments using neural networks and Int2Int\, the integer sequence to integer sequence transformer made by François Charton for this CMSA program. We initially study the Möbius function. This function appears as the coefficients of the reciprocal of the Riemann zeta function and is famously hard to understand. Predicting the Möbius function is closely related to predicting the squarefree indicator function\, leading us to perform similar experiments there. Finally\, we’ll discuss how varying the input representation and model affects the strength of the predictions and allows us to explain most (but not all) of the predictive strength and behavior. \n12:00–1:30 pm Lunch \n1:30–2:30 pm Amaury Hayat\, Ecole des Ponts Paristech\, Melanie Matchett Wood\, Harvard University\, Alex Davies\, DeepMind\, Jeremy Avigad\, Carnegie Mellon University \nTitle: Machine learning and theorem proving \nAbstract: Recent successes in machine learning have raised hopes that neural networks could one day assist mathematicians in proving theorems. This raises the question of an appropriate setting to apply machine learning methods to theorem proving. Formal languages\, such as Lean\, provide automatic verification of mathematical proofs and thus offer a natural environment. Nevertheless\, some challenges emerge\, particularly because these languages are often designed to verify correctness rather than find a solution\, while mathematicians often perform reasoning steps to do both at the same time. This talk will present recent applications of machine learning methods to theorem proving in Lean\, highlight current challenges\, and explore what these developments might mean for the future of mathematics. 
\n  \n2:30–2:45 pm Break \n2:45–3:45 pm Adam Wagner\, Worcester Polytechnic Institute\, Kit Fraser-Taliente\, University of Oxford\, Gergely Bérczi\, Aarhus University\, Thomas Harvey\, MIT \nTitle: Sparse subgraphs of the d-cube with diameter d \nAbstract: Erdős et al. studied spanning subgraphs of the $d$-cube which have the same diameter $d$ as the cube itself. They asked the following natural question: what is the maximum number of edges one can delete from the $d$-dimensional hypercube\, without increasing its diameter? We will discuss how we can use PatternBoost\, a simple machine learning algorithm that alternates local and global optimization steps\, to find good constructions for this problem. \n3:45–4:00 pm Break \n4:00–4:30 pm Petros Koumoutsakos\, Harvard University \n4:30–5:00 pm \nStéphane Mallat\, Flatiron/College de France \nTitle: Image Generation by Score Diffusion and the Renormalisation Group \nAbstract: Score based diffusions generate impressive models of images\, sounds and complex physical systems. Are they generalising or memorising? How can deep networks estimate high-dimensional scores without the curse of dimensionality? This talk shows that generalisation does occur for deep network estimation of scores\, with enough training data.  The ability to avoid the curse of dimensionality seems to rely on multiscale properties revealed by a renormalisation group decomposition coming from statistical physics. Applications to models of turbulence will be introduced and discussed. \n  \nWednesday Oct. 30\, 2024 \n9:15–9:45 am Morning refreshments \n9:45–10:45 am Bin Dong\, Beijing International Center for Mathematical Research (via Zoom)  \nTitle: AI for Mathematics: From Digitization to Intelligentization \nAbstract: This presentation explores the synergistic relationship between AI and mathematics\, beginning with a brief historical overview of their mutually beneficial interactions. 
It then examines notable existing work in AI for mathematics\, highlighting its achievements and limitations.  Next\, I will share preliminary findings from the ongoing AI4M research project at Peking University\, including our work on creating high-quality mathematical datasets through formalization (digitization)\, and our future plans for developing intelligent applications using these datasets. The presentation concludes with a forward-looking perspective on the opportunities and challenges within this exciting interdisciplinary field. \n10:45–11:00 am Break \n11:00 am–12:00 pm Eric Vanden-Eijnden\, Courant/NYU (via Zoom) \nTitle: Generative modeling with flows and diffusions\, with applications to scientific computing. \nAbstract: Generative models based on dynamical transport have recently led to significant advances in unsupervised learning. At the mathematical level\, these models are primarily designed around the construction of a map between two probability distributions that transforms samples from the first into samples from the second.  While these methods were first introduced in the context of image generation\, they have found a wide range of applications\, including in scientific computing where they offer interesting ways to reconsider complex problems once thought intractable because of the curse of dimensionality. In this talk\, I will discuss the mathematical underpinning of generative models based on flows and diffusions\, and show how a better understanding of their inner workings can help improve their design. These results indicate how to structure the transport to best reach complex target distributions while maintaining computational efficiency\, both at learning and sampling stages.  
I will also discuss applications of generative AI in scientific computing\, in particular in the context of applications with models and no data (as opposed to the more standard data and no model)\, such as Monte Carlo sampling\, with applications to statistical mechanics and Bayesian inference\, as well as the numerical integration and interpretation of random dynamical systems driven out of equilibrium. \n12:00–1:30 pm Lunch \n1:30–2:30 pm Kyu-Hwan Lee\, University of Connecticut \nTitle: Discovering New Mathematical Structures with Machine Learning \nAbstract: Can machine learning help discover new mathematical structures? In this talk\, I will present two case studies: murmurations in number theory and loadings of partitions related to Kronecker coefficients in representation theory and combinatorics. The focus will be on the paradigm of examining mathematical objects collectively\, rather than individually\, to create datasets suitable for machine learning experiments and interpretations. \n2:30–2:45 pm Break \n2:45–3:45 pm James Halverson\, Northeastern University \nTitle: Learning the Topological Invariance of Knots \nAbstract: This talk focuses on using machine learning for the defining problem in knot theory\, the classification of knots up to ambient space isotopy. We will train transformers and convolutional neural networks to distinguish topologically inequivalent knots\, given only representatives of the classes and no a priori knowledge of topological invariants. In this scheme\, we find that equivalent knots are well-clustered in the embedding space of the neural network\, and a trained decoder maps effectively from the embedding space back to knot space. Preliminary results will be presented on a new approach to resolving the Jones unknot conjecture. \n3:45–4:00 pm Break \n4:00–5:00 pm Tristan Buckmaster\, New York University\, Javier Gomez-Serrano\, Brown University (via Zoom) \n  \n  \nImage by Sue Side. https://www.sueside.com/\n 
URL:https://cmsa.fas.harvard.edu/event/mmlworkshop_1024/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Workshop
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/ML_Closing-workshop_v3-1.png
END:VEVENT
END:VCALENDAR