BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//CMSA - ECPv6.15.18//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:CMSA
X-ORIGINAL-URL:https://cmsa.fas.harvard.edu
X-WR-CALDESC:Events for CMSA
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20230312T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20231105T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20240310T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20241103T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20250309T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20251102T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20260308T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20261101T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20270314T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20271107T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260327T150000
DTEND;TZID=America/New_York:20260327T171500
DTSTAMP:20260415T164120Z
CREATED:20260323T145751Z
LAST-MODIFIED:20260323T194752Z
UID:10003922-1774623600-1774631700@cmsa.fas.harvard.edu
SUMMARY:Exotic R^4's are unclassifiable
DESCRIPTION:Freedman Seminar \nSpeaker: Robert Gompf\, UT Austin \nTitle: Exotic R^4’s are unclassifiable \nAbstract: We will use descriptive set theory to show that there is a precise sense in which exotic R^4’s are unclassifiable. For other open manifolds\, we can reach a much higher level of unclassifiability. This is work in progress with Aristotelis Panagiotopoulos.
URL:https://cmsa.fas.harvard.edu/event/freedman_32726/
LOCATION:Virtual
CATEGORIES:Freedman Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-Freedman-Seminar-3.27.26-scaled.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260306T150000
DTEND;TZID=America/New_York:20260306T171500
DTSTAMP:20260415T164120Z
CREATED:20260205T145433Z
LAST-MODIFIED:20260225T152603Z
UID:10003889-1772809200-1772817300@cmsa.fas.harvard.edu
SUMMARY:Freedman Seminar: Mattie Ji\, Penn and Jeongwan Haah\, Stanford
DESCRIPTION:Freedman Seminar \nSpeakers: Mattie Ji (Penn) and Jeongwan Haah (Stanford) \nMattie Ji Title: Quantum Cellular Automata via Algebraic K-Theory \nAbstract: Algebraic K-theory\, on a very high level\, is the study of how to break apart and assemble objects linearly\, which makes the field amenable to classification questions. In this work\, we apply this methodology to study the classification of quantum cellular automata (QCA). Over an arbitrary commutative ring R and a general class of metric spaces X\, we construct a space of QCA that depends only on the large-scale (coarse) geometry of X. We explain how QCA classification groups (QCA modulo circuits) either arise naturally as or are refined by this space in most cases of interest. \nMotivated by negative K-theory\, we also show the classification of QCA on Euclidean lattices is given by an $\Omega$-spectrum indexed by the dimension. As a corollary\, we also obtain a non-connective delooping of the K-theory of Azumaya R-algebras\, whose negative homotopy groups are the QCA classification groups. When R is the complex numbers\, our method can be adapted to yield an $\Omega$-spectrum for QCA of $C^*$-algebras with unitary circuits. This talk is based on joint work with Bowen Yang. \n  \nJeongwan Haah Title: Fermionic QCA in 2d are trivial \nAbstract: We consider bounded spread automorphisms of Z/2-graded algebra (fermionic QCA) on the two-dimensional lattice and prove that every fQCA is a unitary circuit followed by fermionic shifts when stabilized by Majorana modes. This is an analog of a theorem by Freedman and Hastings for the case of ungraded algebras. The overall argument follows a similar line in that we show invertible subalgebras in 1d are trivial\, but the stabilization is used crucially. By an existing argument\, this triviality of fQCA in 2d implies that the 3d (bosonic) QCA that disentangles the Walker-Wang model with three-fermion theory is nontrivial. 
 The latter was known to be nontrivial against Clifford gates but remained conjectural against more general unitary gates. To my knowledge\, this gives the only example of an ungraded QCA that is proved to be nontrivial against general unitary circuits and shifts\, and the only example of an ungraded invertible subalgebra that is not isomorphic to any tensor product algebra. I will explain elements new to the fermionic setting and give an overview of the nontriviality argument. (Based on an upcoming work with Jeffrey Kwan and David Long)
URL:https://cmsa.fas.harvard.edu/event/freedman_3626/
LOCATION:Hybrid
CATEGORIES:Freedman Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-Freedman-Seminar-3.6.26-scaled.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260130T133000
DTEND;TZID=America/New_York:20260130T163000
DTSTAMP:20260415T164120Z
CREATED:20260108T211634Z
LAST-MODIFIED:20260122T164709Z
UID:10003870-1769779800-1769790600@cmsa.fas.harvard.edu
SUMMARY:Freedman Seminar: Michael Freedman\, CMSA & Slava Krushkal\, University of Virginia
DESCRIPTION:Freedman Seminar \nSpeakers: Michael Freedman\, CMSA and Slava Krushkal\, University of Virginia (2-3 pm and 3:15-4:15 pm) \nTitle: Formulating 4D surgery for AI agents \nAbstract: The surgery exact sequence in the topological category is still open for free groups (and most groups of exponential growth). The lack of knowledge is about both surgery and s-cobordism\, and the source of the mystery is the same in both cases. Thinking about how to present this problem to AIs has had its own value. In a pair of talks we will explain how we have thought about the problem in the past and how we are thinking about it now.
URL:https://cmsa.fas.harvard.edu/event/freedman_13026/
LOCATION:Hybrid
CATEGORIES:Freedman Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-Freedman-Seminar-1.30.26-1-scaled.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251114T130000
DTEND;TZID=America/New_York:20251114T160000
DTSTAMP:20260415T164120Z
CREATED:20251104T215810Z
LAST-MODIFIED:20251105T144505Z
UID:10003832-1763125200-1763136000@cmsa.fas.harvard.edu
SUMMARY:Freedman Seminar: Michael Freedman\, CMSA & Bowen Yang\, CMSA
DESCRIPTION:Freedman Seminar \nSpeaker: Michael Freedman\, Harvard CMSA \nTitle: Sullivan’s work on Lipschitz structures Part II (but self-contained) \n  \nSpeaker: Bowen Yang\, CMSA \nTitle: Deligne and Sullivan’s work on complex bundles with discrete structure group \n 
URL:https://cmsa.fas.harvard.edu/event/freedman_111425/
LOCATION:Virtual
CATEGORIES:Freedman Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-Freedman-Seminar-11.14.25.docx-scaled.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251017T130000
DTEND;TZID=America/New_York:20251017T160000
DTSTAMP:20260415T164120Z
CREATED:20250930T134721Z
LAST-MODIFIED:20251014T133421Z
UID:10003800-1760706000-1760716800@cmsa.fas.harvard.edu
SUMMARY:Freedman Seminar: Michael Freedman\, CMSA & Bowen Yang\, CMSA
DESCRIPTION:Freedman Seminar \nSpeaker: Michael Freedman\, Harvard CMSA \nTitle: Sullivan’s work on Lipschitz structures \nAbstract: I’ll begin with an elementary\, but now little known\, piece of PL topology: engulfing. John Stallings used it to give an alternative proof of the high dimensional Poincaré conjecture. Then I’ll explain Dennis Sullivan’s enhancement of Kirby’s torus trick (which relies on engulfing). I’ll note an open question regarding Lipschitz structures on 4-manifolds. \n  \nSpeaker: Bowen Yang\, CMSA \nTitle: Quantum Cellular Automata and Algebraic L-Theory \nAbstract: Quantum cellular automata (QCAs) are models of reversible quantum dynamics that preserve locality; they can be thought of as quantum analogues of classical cellular automata\, but with much richer structure. I will describe a classification of the Clifford subclass of QCAs using methods from algebraic L-theory. The main result identifies the group of Clifford QCAs\, up to natural equivalences\, with L-theory homology of the underlying space. This gives a conceptual explanation of previously observed periodic patterns in lattice models and extends the picture to more general spaces. I will outline the ideas behind the construction and indicate how the framework connects topology\, operator algebras\, and quantum information. If time permits\, I will also comment on what is known — and unknown — about the general (non-Clifford) case.
URL:https://cmsa.fas.harvard.edu/event/freedman_101725/
LOCATION:Virtual
CATEGORIES:Freedman Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-Freedman-Seminar-10.17.25.docx-1-scaled.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250915T090000
DTEND;TZID=America/New_York:20250918T170000
DTSTAMP:20260415T164120Z
CREATED:20250710T134311Z
LAST-MODIFIED:20250930T154307Z
UID:10003755-1757926800-1758214800@cmsa.fas.harvard.edu
SUMMARY:The Geometry of Machine Learning
DESCRIPTION:The Geometry of Machine Learning \nDates: September 15–18\, 2025 \nLocation: Harvard CMSA\, Room G10\, 20 Garden Street\, Cambridge MA 02138 \nDespite the extraordinary progress in large language models\, mathematicians suspect that other dimensions of intelligence must be defined and simulated to complete the picture. Geometric and symbolic reasoning are among these. In fact\, there seems to be much to learn about existing ML by considering it from a geometric perspective\, e.g. what is happening to the data manifold as it moves through an NN? How can geometric and symbolic tools be interfaced with LLMs? A more distant goal\, one that seems only approachable through AIs\, would be to gain some insight into the large-scale structure of mathematics as a whole: the geometry of math\, rather than geometry as a subject within math. This conference is intended to begin a discussion on these topics. \nSpeakers \n\nMaissam Barkeshli\, University of Maryland\nEve Bodnia\, Logical Intelligence\nAdam Brown\, Stanford\nBennett Chow\, UCSD & IAS\nMichael Freedman\, Harvard CMSA\nElliot Glazer\, Epoch AI\nJames Halverson\, Northeastern\nJesse Han\, Math Inc.\nJunehyuk Jung\, Brown University\nAlex Kontorovich\, Rutgers University\nYann LeCun\, New York University & META*\nJared Duker Lichtman\, Stanford & Math Inc.\nBrice Ménard\, Johns Hopkins\nMichael Mulligan\, UCR & Logical Intelligence\nPatrick Shafto\, DARPA & Rutgers University\n\nOrganizers: Michael R. Douglas (CMSA) and Mike Freedman (CMSA) \n  \nGeometry of Machine Learning YouTube Playlist \n  \nSchedule \nMonday\, Sep. 15\, 2025 \n\n\n\n8:30–9:00 am\nMorning refreshments\n\n\n9:00–10:00 am\nJames Halverson\, Northeastern \nTitle: Sparsity and Symbols with Kolmogorov-Arnold Networks \nAbstract: In this talk I’ll review Kolmogorov-Arnold nets\, as well as new theory and applications related to sparsity and symbolic regression\, respectively.  
 I’ll review essential results regarding KANs\, show how sparsity masks relate deep nets and KANs\, and how KANs can be utilized alongside multimodal language models for symbolic regression. Empirical results will necessitate a few slides\, but the bulk will be chalk.\n\n\n10:00–10:30 am\nBreak\n\n\n10:30–11:30 am\nMaissam Barkeshli\, University of Maryland \nTitle: Transformers and random walks: from language to random graphs \nAbstract: The stunning capabilities of large language models give rise to many questions about how they work and how much more capable they can possibly get. One way to gain additional insight is via synthetic models of data with tunable complexity\, which can capture the basic relevant structures of real data. In recent work we have focused on sequences obtained from random walks on graphs\, hypergraphs\, and hierarchical graphical structures. I will present some recent empirical results for work in progress regarding how transformers learn sequences arising from random walks on graphs. The focus will be on neural scaling laws\, unexpected temperature-dependent effects\, and sample complexity.\n\n\n11:30 am–12:00 pm\nBreak\n\n\n12:00–1:00 pm\nAdam Brown\, Stanford \nTitle: LLMs\, Reasoning\, and the Future of Mathematical Sciences \nAbstract: Over the last half decade\, the mathematical capabilities of large language models (LLMs) have leapt from preschooler to undergraduate and now beyond. This talk reviews recent progress\, and speculates as to what it will mean for the future of mathematical sciences if these trends continue.\n\n\n\n  \nTuesday\, Sep. 16\, 2025 \n\n\n\n8:30–9:00 am\nMorning refreshments\n\n\n9:00–10:00 am\nJunehyuk Jung\, Brown University \nTitle: AlphaGeometry: a step toward automated math reasoning \nAbstract: Last summer\, Google DeepMind’s AI systems made headlines by achieving Silver Medal level performance on the notoriously challenging International Mathematical Olympiad (IMO) problems. 
 For instance\, AlphaGeometry 2\, one of these remarkable systems\, solved the geometry problem in a mere 19 seconds! \nIn this talk\, we will delve into the inner workings of AlphaGeometry\, exploring the innovative techniques that enable it to tackle intricate geometric puzzles. We will uncover how this AI system combines the power of neural networks with symbolic reasoning to discover elegant solutions.\n\n\n10:00–10:30 am\nBreak\n\n\n10:30–11:30 am\nBennett Chow\, UCSD and IAS \nTitle: Ricci flow as a test for AI\n\n\n11:30 am–12:00 pm\nBreak\n\n\n12:00–1:00 pm\nJared Duker Lichtman\, Stanford & Math Inc. and Jesse Han\, Math Inc. \nTitle: Gauss – towards autoformalization for the working mathematician \nAbstract: In this talk we’ll highlight some recent formalization progress using a new agent – Gauss. We’ll outline a recent Lean proof of the Prime Number Theorem in strong form\, completing a challenge set in January 2024 by Alex Kontorovich and Terry Tao. We hope Gauss will assist working mathematicians\, especially those who do not write formal code themselves.\n\n\n5:00–6:00 pm\nSpecial Lecture: Yann LeCun\, Science Center Hall C\n\n\n\n  \nWednesday\, Sep. 17\, 2025 \n\n\n\n8:30–9:00 am\nRefreshments\n\n\n9:00–10:00 am\nMichael Mulligan\, UCR and Logical Intelligence \nTitle: Spontaneous Kolmogorov-Arnold Geometry in Vanilla Fully-Connected Neural Networks \nAbstract: The Kolmogorov-Arnold (KA) representation theorem constructs universal\, but highly non-smooth inner functions (the first layer map) in a single (non-linear) hidden layer neural network. Such universal functions have a distinctive local geometry\, a “texture\,” which can be characterized by the inner function’s Jacobian\, $J(\mathbf{x})$\, as $\mathbf{x}$ varies over the data. It is natural to ask if this distinctive KA geometry emerges through conventional neural network optimization. 
 We find that indeed KA geometry often does emerge through the process of training vanilla single hidden layer fully-connected neural networks (MLPs). We quantify KA geometry through the statistical properties of the exterior powers of $J(\mathbf{x})$: number of zero rows and various observables for the minor statistics of $J(\mathbf{x})$\, which measure the scale and axis alignment of $J(\mathbf{x})$. This leads to a rough phase diagram in the space of function complexity and model hyperparameters where KA geometry occurs. The motivation is first to understand how neural networks organically learn to prepare input data for later downstream processing and\, second\, to learn enough about the emergence of KA geometry to accelerate learning through a timely intervention in network hyperparameters. This research is the “flip side” of KA-Networks (KANs). We do not engineer KA into the neural network\, but rather watch KA emerge in shallow MLPs.\n\n\n10:00–10:30 am\nBreak\n\n\n10:30–11:30 am\nEve Bodnia\, Logical Intelligence \nTitle: \nAbstract: We introduce a method of topological analysis on spiking correlation networks in neurological systems. This method explores the neural manifold as in the manifold hypothesis\, which posits that information is often represented by a lower-dimensional manifold embedded in a higher-dimensional space. After collecting neuron activity from human and mouse organoids using a micro-electrode array\, we extract connectivity using pairwise spike-timing time correlations\, which are optimized for time delays introduced by synaptic delays. We then look at network topology to identify emergent structures and compare the results to two randomized models – constrained randomization and bootstrapping across datasets. 
 In histograms of the persistence of topological features\, we see that the features from the original dataset consistently exceed the variability of the null distributions\, suggesting that the observed topological features reflect significant correlation patterns in the data rather than random fluctuations. In a study of network resiliency\, we found that random removal of 10% of nodes still yielded a network with a lesser but still significant number of topological features in the homology group H1 (counts 2-dimensional voids in the dataset) above the variability of our constrained randomization model; however\, targeted removal of nodes in H1 features resulted in rapid topological collapse\, indicating that the H1 cycles in these brain organoid networks are fragile and highly sensitive to perturbations. By applying topological analysis to neural data\, we offer a new complementary framework to standard methods for understanding information processing across a variety of complex neural systems.\n\n\n11:30 am–12:00 pm\nBreak\n\n\n12:00–1:00 pm\nAlex Kontorovich\, Rutgers University \nTitle: The Shape of Math to Come \nAbstract: We will discuss some ongoing experiments that may have meaningful impact on what working in research mathematics might look like in a decade (if not sooner).\n\n\n5:00–6:00 pm\nMike Freedman Millennium Lecture: The Poincaré Conjecture and Mathematical Discovery (Science Center Hall D)\n\n\n\n  \nThursday\, Sep. 18\, 2025 \n\n\n\n8:30–9:00 am\nMorning refreshments\n\n\n9:00–10:00 am\nElliot Glazer\, Epoch AI \nTitle: FrontierMath to Infinity \nAbstract: I will discuss FrontierMath\, a mathematical problem solving benchmark I developed over the past year\, including its design philosophy and what we’ve learned about AI’s trajectory from it. 
 I will then look much further out\, speculate about what a “perfectly efficient” mathematical intelligence should be capable of\, and discuss how high-ceiling math capability metrics can illuminate the path towards that ideal.\n\n\n10:00–10:30 am\nBreak\n\n\n10:30–11:30 am\nBrice Ménard\, Johns Hopkins \nTitle: Demystifying the over-parametrization of neural networks \nAbstract: I will show how to estimate the dimensionality of neural encodings (learned weight structures) to assess how many parameters are effectively used by a neural network. I will then show how their scaling properties provide us with fundamental exponents on the learning process of a given task. I will comment on connections to thermodynamics.\n\n\n11:30 am–12:00 pm\nBreak\n\n\n12:00–12:30 pm\nPatrick Shafto\, Rutgers \nTitle: Math for AI and AI for Math \nAbstract: I will briefly discuss two DARPA programs aiming to deepen connections between mathematics and AI\, specifically through geometric and symbolic perspectives. The first aims for mathematical foundations for understanding the behavior and performance of modern AI systems such as Large Language Models and Diffusion models. The second aims to develop AI for pure mathematics through an understanding of abstraction\, decomposition\, and formalization. I will close with some thoughts on the coming convergence between AI and math.\n\n\n12:30–12:45 pm\nBreak\n\n\n12:45–2:00 pm\nMike Freedman\, Harvard CMSA \nTitle: How to think about the shape of mathematics \nFollowed by group discussion \n \n\n\n\n  \n  \n  \nSupport provided by Logical Intelligence. \n \n  \n 
URL:https://cmsa.fas.harvard.edu/event/mlgeometry/
LOCATION:CMSA 20 Garden Street Cambridge\, Massachusetts 02138 United States
CATEGORIES:Conference,Event
ATTACH;FMTTYPE=image/jpeg:https://cmsa.fas.harvard.edu/media/GML_2025.7-scaled.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250425T030000
DTEND;TZID=America/New_York:20250425T160000
DTSTAMP:20260415T164120Z
CREATED:20250422T134510Z
LAST-MODIFIED:20250422T140503Z
UID:10003713-1745550000-1745596800@cmsa.fas.harvard.edu
SUMMARY:Adversarial KA
DESCRIPTION:Freedman CMSA Seminar \nSpeaker: Slava Dzhenzher\, MIPT \nTitle: Adversarial KA \nAbstract: Regarding the representation theorem of Kolmogorov and Arnold (KA) as an algorithm for representing or «expressing» functions\, we test its robustness by analyzing its ability to withstand adversarial attacks. We find KA to be robust to countable collections of continuous adversaries\, but unearth a question about the equi-continuity of the outer functions that\, so far\, obstructs taking limits and defeating continuous groups of adversaries. This question on the regularity of the outer functions is relevant to the debate over the applicability of KA to the general theory of NNs. Based on  https://arxiv.org/abs/2504.05255 \n  \n 
URL:https://cmsa.fas.harvard.edu/event/freedman_42525/
LOCATION:Virtual
CATEGORIES:Freedman Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-Freedman-Seminar-4.25.25.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250312T150000
DTEND;TZID=America/New_York:20250312T170000
DTSTAMP:20260415T164120Z
CREATED:20250210T183743Z
LAST-MODIFIED:20250307T175626Z
UID:10003711-1741791600-1741798800@cmsa.fas.harvard.edu
SUMMARY:Freedman CMSA Seminar: Michael Freedman (CMSA) & Elia Portnoy (MIT)
DESCRIPTION:Freedman CMSA Seminar \nSpeaker: Michael Freedman\, Harvard CMSA (3:00–4:00 pm ET) \nTitle: How many links can you fit in a box? \nAbstract: I’ll discuss a “made up” problem on the interface of topology and packing\, which may well be classified as “recreational math”. Here is the first question: suppose you have a unit box. How many unlinked (split) copies of the Hopf link (c_{1\,i}\, c_{2\,i}) can be embedded so that for each copy the two components c_{1\,i} and c_{2\,i} maintain a distance of at least some fixed \epsilon > 0? Is this number even finite? \n  \nSpeaker: Elia Portnoy\, MIT (4:00–5:00 pm ET) \nTitle: An explicit packing of links in a box and some progress in quantitative embeddings \nAbstract: Following Freedman’s talk\, I’ll begin by showing how to pack a large number of links in a box with certain geometric and topological constraints (joint with Fedya Manin). If time permits\, I’ll also discuss some progress and open questions for the following quantitative embedding problem: given a simplicial complex X\, what is the smallest size of a map from X to R^n so that the preimage of each unit ball intersects a small constant number of simplices? \n 
URL:https://cmsa.fas.harvard.edu/event/freedman_31225/
LOCATION:Hybrid – G10
CATEGORIES:Freedman Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-Freedman-Seminar-3.12.25.docx-1.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250207T140000
DTEND;TZID=America/New_York:20250207T154500
DTSTAMP:20260415T164120Z
CREATED:20250127T151529Z
LAST-MODIFIED:20250127T155730Z
UID:10003673-1738936800-1738943100@cmsa.fas.harvard.edu
SUMMARY:Is every knot isotopic to the unknot?
DESCRIPTION:Freedman CMSA Seminar \n*via Zoom* \nSpeaker: Sergey Melikhov\, Steklov Math Institute \nTitle: Is every knot isotopic to the unknot? \nAbstract: The following problem was stated by D. Rolfsen in his 1974 paper; according to R. Daverman it had been discussed since the mid-60s. Is every knot in $S^3$ isotopic (=homotopic through embeddings) to a PL knot — or\, equivalently\, to the unknot? In particular\, is the Bing sling isotopic to a PL knot? We show that the Bing sling $B$ is not isotopic to any PL knot by an isotopy which extends to an isotopy of any 2-component link obtained from $B$ by adding a disjoint component $Q$ such that $lk(B\,Q)=1$. Moreover\, the assertion remains true if the additional component is allowed to self-intersect\, and even to get replaced by a new one at any time instant $t$\, as long as it remains disjoint from the original component $K_t$ and represents the same conjugacy class as the old one in $G/[G’\,G”]$\, where $G=\pi_1(S^3\setminus K_t)$. There are examples showing that the latter result cannot be improved in certain ways. I plan to present a sketch of the proof\, modulo some ingredients. The details can be found in arXiv:2406.09365 and the main ingredients in arXiv:2406.09331 and arXiv:math/0312007v3. \n  \n 
URL:https://cmsa.fas.harvard.edu/event/freedman_2725/
LOCATION:Virtual
CATEGORIES:Freedman Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-Freedman-Seminar-2.7.25.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241206T160000
DTEND;TZID=America/New_York:20241206T170000
DTSTAMP:20260415T164120Z
CREATED:20240923T164849Z
LAST-MODIFIED:20241202T185723Z
UID:10003603-1733500800-1733504400@cmsa.fas.harvard.edu
SUMMARY:A simple model for universal quantum computation
DESCRIPTION:Freedman CMSA Seminar \nSpeaker: Michael Freedman \nTitle: A simple model for universal quantum computation \nAbstract: I’ll present joint (unpublished) work with Charlie Marcus on a surprisingly simple – and potentially practical(?) – model for universal quantum computation whose only quantum primitive is the ability to measure a pair of adjacent electrons into either singlet (spin=0) or triplet (spin=1) sectors according to the Born rule. The electrons are located on quantum dots arranged in a triangular lattice whose edges are tiny strips of s-wave superconductor. \n 
URL:https://cmsa.fas.harvard.edu/event/freedman_12624/
LOCATION:Virtual
CATEGORIES:Freedman Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-Freedman-Seminar-12.06.2024.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241115T143000
DTEND;TZID=America/New_York:20241115T173000
DTSTAMP:20260415T164120Z
CREATED:20240923T164810Z
LAST-MODIFIED:20241112T153736Z
UID:10003602-1731681000-1731691800@cmsa.fas.harvard.edu
SUMMARY:Freedman CMSA Seminar
DESCRIPTION:Freedman CMSA Seminar \n*Note: via Zoom only* \n  \n2:00-3:30 pm ET \nSpeaker: Michael Freedman\, Harvard CMSA \nTitle: Some questions and theorems about closed 3 manifolds embedded in S^4 \nAbstract: Much is unknown about smooth embeddings of 3-manifolds in S^4; the Schoenflies problem  (Is there only one smoothly embedded 3-sphere in S^4 up to isotopy?) is the best-known example. There has long been a hope that 3-manifold reasoning applied to level-sets will be helpful.  I’ll mention some successes and failures of this method and revisit a classical theorem of Hantzsche in this light. (Hantzsche: If a 3-manifold embeds in S^4 its linking form is hyperbolic.) \n  \n3:30-4:00 pm ET \nBreak/Discussion \n  \n4:00-5:30 pm ET \nSpeaker: Slava Krushkal\, University of Virginia \nTitle: A higher order torsion linking form for 3-manifolds \nAbstract: This talk is based on a joint work with Mike Freedman defining a triple linking form for rational homology spheres\, assuming that the classical torsion linking pairing of three classes pairwise vanishes. I will discuss its vanishing for 3-manifolds in S^4\, and its relation to the Matsumoto triple intersection form on 4-manifolds. \n  \n 
URL:https://cmsa.fas.harvard.edu/event/freedman_11824/
LOCATION:Virtual
CATEGORIES:Freedman Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-Freedman-Seminar-11.15.2024.docx-1.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241025T143000
DTEND;TZID=America/New_York:20241025T173000
DTSTAMP:20260415T164120Z
CREATED:20240907T191539Z
LAST-MODIFIED:20241010T152044Z
UID:10003466-1729866600-1729877400@cmsa.fas.harvard.edu
SUMMARY:Freedman CMSA Seminar
DESCRIPTION:Freedman CMSA Seminar \n*Note: via Zoom only* \n2:00-3:30 pm ET \nSpeaker: Matt Hastings\, Microsoft Quantum Program \nTitle: Invertible Phases of Matter and Quantum Cellular Automata: Dimensions One to Three \nAbstract: A Quantum Cellular Automaton (QCA) is a *-automorphism of the algebra of local operators. While local quantum circuits provide one example of QCA\, we are most interested in nontrivial QCA which are those which cannot be written as conjugation by a local quantum circuit. For systems in one and two spatial dimensions\, all nontrivial QCA are shifts (i.e.\, translations by some amount)\, up to conjugation by a quantum circuit\, but in three and higher dimensions\, other examples are known. I’ll explain the relation between QCA and a certain “boundary algebra” of operators in one lower spatial dimension\, and also the relation to invertible phases of matter on the boundary\, and use this to explain and motivate some of these results in dimensions one through three. \n  \n3:30-4:00 pm ET \nBreak/Discussion \n  \n4:00-5:30 pm ET \nSpeaker: Lukasz Fidkowski\, U Washington\, Physics \nTitle: Invertible Phases of Matter and Quantum Cellular Automata: Higher dimensions \nAbstract: We discuss the explicit construction of a non-trivial QCA in 3 dimensions\, one which takes the form of multiplication by a discrete Chern-Simons functional in an appropriate basis for the Hilbert space. We relate the non-trivialness of the QCA to the fact that the Chern-Simons action is not the integral of a gauge invariant local quantity. One property of this QCA is that it creates a specific non-trivial time reversal symmetry protected topological (SPT) phase when acting on a non-trivial tensor product state. Motivated by this\, we construct a general class of QCA in arbitrary dimensions based on time reversal protected SPTs\, and conjecture a general correspondence between unoriented cobordism (which classifies such SPTs) and QCA. \n  \n 
URL:https://cmsa.fas.harvard.edu/event/freedman_102524/
LOCATION:Virtual
CATEGORIES:Freedman Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-Freedman-Seminar-10.25.2024.docx-1.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240913T143000
DTEND;TZID=America/New_York:20240913T170000
DTSTAMP:20260415T164120Z
CREATED:20240723T202450Z
LAST-MODIFIED:20240911T134726Z
UID:10003401-1726237800-1726246800@cmsa.fas.harvard.edu
SUMMARY:Freedman CMSA Seminar
DESCRIPTION:Freedman CMSA Seminar \n  \n2:00-3:30 pm ET \nSpeaker: Mike Freedman\, Harvard CMSA \nTitle: Detecting hidden structures in linear maps \nAbstract: I’ll consider the problem of detecting spectral features and tensor structures within linear maps in both quantum and classical contexts. In the quantum context there is the question of whether a Hamiltonian is local\, and if so\, local in distinct coordinate systems (a “duality”). Also\, in the case of a unitary described by a quantum circuit\, does it possess unusual spectral features or tensor structure? In ML one optimizes many linear maps. How would we know – and would we care – if the resulting maps (approximately) tensor factored? \n  \n3:30-4:00 pm ET \nBreak/Discussion \n  \n4:00-5:30 pm ET \nSpeaker: Ryan O’Donnell\, Carnegie Mellon University \nTitle: Quartic quantum speedups for planted inference \nAbstract: Consider the following task (“noisy 4XOR”)\, arising in CSPs\, optimization\, and cryptography. There is a ‘secret’ Boolean vector x in {-1\,+1}^n. One gets m randomly chosen pairs (S\, b)\, where S is a set of 4 coordinates from [n] and b is x^S := prod_{i in S} x_i with probability 1-eps\, and -x^S with probability eps. Can you tell the difference between the cases eps = 0.1 and eps = 0.5? \nIt depends on m. The best known algorithms use the “Kikuchi method” and run in time ~n^L when m ~ n^2/L. We will review this method\, and also show that the running time can be improved to roughly n^{L/4} with a quantum algorithm. \nJoint work with Alexander Schmidhuber (MIT)\, Robin Kothari (Google)\, and Ryan Babbush (Google).
URL:https://cmsa.fas.harvard.edu/event/freedman_91324/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Freedman Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-Freedman-Seminar-09.13.2024.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240903T090000
DTEND;TZID=America/New_York:20241101T170000
DTSTAMP:20260415T164120
CREATED:20240105T033600Z
LAST-MODIFIED:20250305T175957Z
UID:10001112-1725354000-1730480400@cmsa.fas.harvard.edu
SUMMARY:Mathematics and Machine Learning Program
DESCRIPTION:Mathematics and Machine Learning Program \nDates: September 3 – November 1\, 2024 \nLocation: Harvard CMSA\, 20 Garden Street\, Cambridge\, MA 02138 \nMachine learning and AI are increasingly important tools in all fields of research. Recent milestones in machine learning for mathematics include data-driven discovery of theorems in knot theory and representation theory\, the discovery and proof of new singular solutions of the Euler equations\, new counterexamples and lower bounds in graph theory\, and more. Rigorous numerical methods and interactive theorem proving are playing an important part in obtaining these results. Conversely\, much of the spectacular progress in AI has a surprising simplicity at its core. Surely there are remarkable mathematical structures behind this\, yet to be elucidated. \nThe program will begin and end with two week-long workshops\, and will feature focus weeks on number theory\, knot theory\, graph theory\, rigorous numerics in PDE\, and interactive theorem proving\, as well as a course on geometric aspects of deep learning.\n\n  \nSeptember 3–5\, 2024: Opening Workshop: AI for Mathematicians\, with Leon Bottou\, François Charton\, David McAllester\, Adam Wagner and Geordie Williamson. A series of six lectures covering logic and theorem proving\, AI methods\, theory of machine learning\, two lectures on case studies in math-AI\, and a lecture and discussion on open problems and the ethics of AI in science.\nOpening Workshop Youtube Playlist \n\nSeptember 6–7\, 2024: Big Data Conference \n  \nSeptember 9–13\, 2024: Applying Machine Learning to Math\, with François Charton and Geordie Williamson\nPublic Lecture September 12\, 2024: Geordie Williamson\, University of Sydney: Can AI help with hard mathematics? (Youtube link)\nThe focus of this week will be on practical examples and techniques for the mathematics researcher keen to explore or deepen their use of AI techniques. 
We will have talks showcasing easily stated problems\, on which machine learning techniques can be employed profitably. These provide excellent toy examples for generating intuition. We will also have expert talks on some of the technical subtleties which arise. There are several instances where the accepted heuristics emerging from the study of large language models (LLMs) and image recognition don’t appear to apply to mathematics problems\, and we will try to highlight these subtleties.\nApplying Machine Learning to Math Youtube Playlist \n  \nSeptember 16–20\, 2024: Number theory\, with Drew Sutherland\nThe focus of this week will be on the use of ML as a tool for finding and understanding statistical patterns in number-theoretic datasets\, using the recently discovered (and still largely unexplained) “murmurations” in the distribution of Frobenius traces in families of elliptic curves and other arithmetic L-functions as a motivating example.\nNumber Theory Youtube Playlist \n  \nSeptember 23–27\, 2024: Knot theory\, with Sergei Gukov\nKnot theory is a great source of labeled data that can be synthetically generated. Moreover\, many outstanding problems in knot theory and low-dimensional topology can be formulated as decision and classification tasks\, e.g. “Is the knot 123_45 slice?” or “Can two given Kirby diagrams be related by a sequence of Kirby moves?” During this focus week we will explore various ways in which AI can be applied to problems in knot theory and how\, based on these applications\, mathematical reasoning can advance the development of AI algorithms. Another goal will be to develop formal knot theory libraries (e.g. 
contributions to mathlib) and to apply AI models to formal proof systems\, in particular in the context of knot theory.\nKnot Theory Youtube Playlist \n  \nSeptember 30: Teaching and Machine Learning Panel Discussion\, 3:30-5:30 pm ET \n  \nSeptember 30–October 4\, 2024: Graph theory and combinatorics\, with Adam Wagner\nThis week\, we will consider how machine learning can help us solve problems in combinatorics and graph theory\, broadly interpreted\, in practice. The advantage of these fields is that they deal with finite objects that are simple to set up using computers\, and programs that work for one problem can often be adapted to work for several other related problems as well. Many times\, the best constructions for a problem are easy to interpret\, making it simpler to judge how well a particular algorithm is performing. On the other hand\, there are lots of open conjectures that are simple to state\, for which the best-known constructions are counterintuitive\, making it perhaps more likely that machine learning methods can spot patterns that are difficult to understand otherwise.\nGraph Theory and Combinatorics Youtube Playlist \n  \nOctober 7–11\, 2024: More number theory\, with Drew Sutherland\nThe focus of this week will be on the use of AI as a tool to search for and/or construct interesting or extremal examples in number theory and arithmetic geometry\, using LLM-based genetic algorithms\, generative adversarial networks\, game-theoretic methods\, and heuristic tree pruning as alternatives to conventional local search strategies.\nMore Number Theory Youtube Playlist \n  \nOctober 14–18\, 2024: Interactive theorem proving\nThis week we will discuss the use of interactive theorem proving systems such as Lean\, Coq and Isabelle in mathematical research\, and AI systems which prove theorems and translate between informal and formal mathematics.\nInteractive Theorem Proving Youtube Playlist \n  \nOctober 21–25\, 2024: Numerical Partial Differential 
Equations (PDE)\, with Tristan Buckmaster and Javier Gomez-Serrano\nThe focus of this week will be on constructing solutions to partial differential equations and dynamical systems (finite and infinite dimensional) more broadly defined. We will discuss several toy problems and comment on issues like sampling strategies\, optimization algorithms\, ill-posedness\, or convergence. We will also outline strategies for further developing machine-learning findings and turning them into mathematical theorems via computer-assisted approaches.\nNumerical PDEs Youtube Playlist \n  \nOctober 28–Nov. 1\, 2024: Closing Workshop: The closing workshop will provide a forum for discussing the most current research in these areas\, including work in progress and recent results from program participants.\nMath and Machine Learning Closing Workshop Youtube Playlist \n  \nSeptember 3–Nov. 1: Graduate topics in deep learning theory (Boston College) taught by Eli Grigsby\, held at the CMSA Tuesdays and Thursdays 2:30–3:45 pm Eastern Time. Course website (link).\nGraduate Topics in Deep Learning Youtube Playlist \nCourse description: This is a course on geometric aspects of deep learning theory. Broadly speaking\, we’ll investigate the question: How might human-interpretable concepts be expressed in the geometry of their data encodings\, and how does this geometry interact with the computational units and higher-level algebraic structures in various parameterized function classes\, especially neural network classes? During the Sep. 3–Nov. 1 portion\, the course will be presented as part of the Math and Machine Learning program at the CMSA in Cambridge. During that portion\, we will focus on the current state of research on mechanistic interpretability of transformers\, the architecture underlying large language models like ChatGPT. \n\n\n\n\nPrerequisites: This course is targeted to graduate students and advanced undergraduates in mathematics and theoretical computer science. 
No prior background in machine learning or learning theory will be assumed\, but I will assume a degree of mathematical maturity (at the level of\, say\, the standard undergraduate math curriculum plus a first-year graduate geometry/topology sequence).\n\n\n\n\n\nProgram Organizers \n\nFrançois Charton (Meta AI)\nMichael R. Douglas (Harvard CMSA)\nMichael Freedman (Harvard CMSA)\nFabian Ruehle (Northeastern)\nGeordie Williamson (Univ. of Sydney)\n\n\nProgram Schedule  \nMonday\n10:30–noon\nOpen Discussion\nRoom G10 \n12:00–1:30 pm\nGroup lunch\nCMSA Common Room \nTuesday\n2:30–3:45 pm\nTopics in deep learning theory\nRoom G10 \n4:00–5:00 pm\nOpen Discussion/Tea\nCMSA Common Room \nWednesday\n10:30 am–12:00 pm\nOpen Discussion\nRoom G10 \n2:00–3:00 pm\nNew Technologies in Mathematics Seminar\nRoom G10 \nThursday\n2:30–3:45 pm\nTopics in deep learning theory\nRoom G10 \nFriday\n10:30 am–12:00 pm\nOpen Discussion\nRoom G10 \n\nHarvard CMSA thanks Mistral AI for a generous donation of computing credit.
URL:https://cmsa.fas.harvard.edu/event/mml2024/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Event,Programs
ATTACH;FMTTYPE=image/jpeg:https://cmsa.fas.harvard.edu/media/Machine-Learning-Program-poster-1.jpg
END:VEVENT
END:VCALENDAR