During 2025–26, the CMSA will host a seminar on New Technologies in Mathematics, organized by Michael Douglas and Blake Bordelon. The seminar meets on Wednesdays from 2:00–3:00 pm (Eastern Time) in Room G10 at the CMSA, 20 Garden Street, Cambridge, MA 02138; some meetings are held virtually on Zoom or in hybrid format. To learn how to attend, please fill out this form, or contact Michael Douglas (mdouglas@cmsa.fas.harvard.edu).

The schedule will be updated as talks are confirmed.

Seminar videos can be found on the CMSA YouTube channel: New Technologies in Mathematics Playlist

  • Triple Descent and a Fine-Grained Bias-Variance Decomposition

    Speaker: Jeffrey Pennington, Google Brain
    Title: Triple Descent and a Fine-Grained Bias-Variance Decomposition
    Abstract: Classical learning theory suggests that the optimal generalization performance of a machine learning model should occur at an intermediate model complexity, striking a balance between simpler models that exhibit high bias and more complex models that exhibit high variance of the […]
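The classical trade-off described in the abstract can be estimated numerically. The sketch below (not from the talk; the task, degrees, and noise level are illustrative choices) fits polynomial regressors to many resampled training sets and measures bias² and variance of the predictions:

```python
import numpy as np

rng = np.random.default_rng(0)

def true_fn(x):
    return np.sin(2 * np.pi * x)

def fit_predict(degree, x_test, n_train=30, noise=0.3):
    # Draw a fresh noisy training set and fit a degree-d polynomial.
    x = rng.uniform(0, 1, n_train)
    y = true_fn(x) + noise * rng.normal(size=n_train)
    coefs = np.polyfit(x, y, degree)
    return np.polyval(coefs, x_test)

def bias_variance(degree, n_trials=200):
    # Average over many training sets: bias^2 of the mean predictor,
    # and variance of predictions around that mean.
    x_test = np.linspace(0, 1, 50)
    preds = np.stack([fit_predict(degree, x_test) for _ in range(n_trials)])
    bias2 = np.mean((preds.mean(axis=0) - true_fn(x_test)) ** 2)
    var = np.mean(preds.var(axis=0))
    return bias2, var

b1, v1 = bias_variance(degree=1)   # underfit: high bias, low variance
b9, v9 = bias_variance(degree=9)   # flexible: low bias, high variance
```

The classical picture emerges: the degree-1 model has much higher bias and lower variance than the degree-9 model. The talk's finer-grained decomposition refines exactly this kind of accounting.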

  • Generalization bounds for rational self-supervised learning algorithms, or “Understanding generalizations requires rethinking deep learning”

    https://youtu.be/aVB1qFPeEmo
    Speakers: Boaz Barak and Yamini Bansal, Harvard University Dept. of Computer Science
    Title: Generalization bounds for rational self-supervised learning algorithms, or "Understanding generalizations requires rethinking deep learning"
    Abstract: The generalization gap of a learning algorithm is the expected difference between its performance on the training data and its performance on fresh unseen test samples. […]
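The quantity defined in the abstract is easy to compute directly. A minimal sketch (the classifier and synthetic data are illustrative, not from the talk) measuring the generalization gap of a nearest-centroid classifier:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two Gaussian classes in 2D, separated along the diagonal.
def sample(n):
    y = rng.integers(0, 2, n)
    x = rng.normal(size=(n, 2)) + 2.0 * y[:, None]
    return x, y

x_train, y_train = sample(200)
x_test, y_test = sample(2000)

# Nearest-centroid classifier fit on the training set.
centroids = np.stack([x_train[y_train == c].mean(axis=0) for c in (0, 1)])

def accuracy(x, y):
    d = np.linalg.norm(x[:, None, :] - centroids[None, :, :], axis=2)
    return np.mean(d.argmin(axis=1) == y)

# Generalization gap: train performance minus fresh-sample performance.
gap = accuracy(x_train, y_train) - accuracy(x_test, y_test)
```

For this simple, low-capacity model the gap is small; the talk's concern is bounding this gap for modern self-supervised pipelines, where classical bounds are vacuous.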

  • Some exactly solvable models for machine learning via Statistical physics

    Virtual

    https://youtu.be/uUUeTYzMu0Q
    Speaker: Florent Krzakala, EPFL
    Title: Some exactly solvable models for machine learning via Statistical physics
    Abstract: The increasing dimensionality of data in the modern machine learning age presents new challenges and opportunities. The high dimensional settings allow one to use powerful asymptotic methods from probability theory and statistical physics to obtain precise characterizations and […]

  • Towards AI for mathematical modeling of complex biological systems: Machine-learned model reduction, spatial graph dynamics, and symbolic mathematics

    Virtual

    https://youtu.be/t4xRwWxTzSg
    Speaker: Eric Mjolsness, Departments of Computer Science and Mathematics, UC Irvine
    Title: Towards AI for mathematical modeling of complex biological systems: Machine-learned model reduction, spatial graph dynamics, and symbolic mathematics
    Abstract: The complexity of biological systems (among others) makes demands on the complexity of the mathematical modeling enterprise that could be satisfied with mathematical […]

  • Universes as Big Data, or Machine-Learning Mathematical Structures

    Virtual

    https://youtu.be/zj_Xc2QG-vw
    Speaker: Yang-Hui He, Oxford University, City University of London and Nankai University
    Title: Universes as Big Data, or Machine-Learning Mathematical Structures
    Abstract: We review how historically the problem of string phenomenology led theoretical physics first to algebraic/differential geometry, then to computational geometry, and now to data science and AI. With the concrete playground […]

  • Machine learning and SU(3) structures on six manifolds

    Virtual

    Speaker: James Gray, Virginia Tech
    Title: Machine learning and SU(3) structures on six manifolds
    Abstract: In this talk we will discuss the application of Machine Learning techniques to obtain numerical approximations to various metrics of SU(3) structure on six manifolds. More precisely, we will be interested in SU(3) structures whose torsion classes make them suitable […]

  • AI and Theorem Proving

    Virtual

    https://youtu.be/UnYrWuOzOlc
    Speaker: Josef Urban, Czech Technical University
    Title: AI and Theorem Proving
    Abstract: The talk will discuss the main approaches that combine machine learning with automated theorem proving and automated formalization. This includes learning to choose relevant facts for “hammer” systems, guiding the proof search of tableaux and superposition automated provers by interleaving learning and […]

  • Language Modeling for Mathematical Reasoning

    Virtual

    Speaker: Christian Szegedy
    Title: Language Modeling for Mathematical Reasoning
    Abstract: In this talk, I will summarize the current state of the art of transformer-based language models and give examples of non-trivial reasoning tasks language models can solve in higher-order logic reasoning. I will also discuss how to inject inductive bias into transformer networks via pretraining on […]

  • Knowledge graph representation: From recent models towards a theoretical understanding

    Speakers: Carl Allen and Ivana Balažević, University of Edinburgh School of Informatics
    Title: Knowledge graph representation: From recent models towards a theoretical understanding
    Abstract: Knowledge graphs (KGs), or knowledge bases, are large repositories of facts in the form of triples (subject, relation, object), e.g. (Edinburgh, capital_of, Scotland). Many models have been developed to succinctly represent KGs […]
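The triple representation in the abstract is concrete enough to sketch. TransE is one well-known KG embedding model of the kind such surveys typically cover (whether it features in this talk is an assumption); it represents entities and relations as vectors and models a true triple (h, r, t) as h + r ≈ t. A toy scoring sketch with hand-set embeddings (real models learn them by gradient descent):

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8

# Toy vocabulary of entities and one relation.
entities = {"Edinburgh": 0, "Scotland": 1, "Paris": 2, "France": 3}
relations = {"capital_of": 0}

E = rng.normal(size=(len(entities), dim))
R = rng.normal(size=(len(relations), dim))

# Hand-set embeddings so the two true triples hold exactly,
# purely for illustration.
E[1] = E[0] + R[0]   # Scotland  = Edinburgh + capital_of
E[3] = E[2] + R[0]   # France    = Paris     + capital_of

def score(h, r, t):
    # TransE score: higher (closer to 0) means more plausible.
    return -np.linalg.norm(E[entities[h]] + R[relations[r]] - E[entities[t]])

true_score = score("Edinburgh", "capital_of", "Scotland")
false_score = score("Edinburgh", "capital_of", "France")
```

By construction the true triple scores higher than the false one; the theoretical question raised in the talk is why such simple geometric parameterizations capture relational structure so well.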

  • A Mathematical Exploration of Why Language Models Help Solve Downstream Tasks

    https://youtu.be/OoimTbnSe7I
    Speaker: Nikunj Saunshi, Dept. of Computer Science, Princeton University
    Title: A Mathematical Exploration of Why Language Models Help Solve Downstream Tasks
    Abstract: Autoregressive language models pretrained on large corpora have been successful at solving downstream tasks, even with zero-shot usage. However, there is little theoretical justification for their success. This paper considers the following […]

  • A Mathematical Language

    Speaker: Thomas Hales, Univ. of Pittsburgh Dept. of Mathematics
    Title: A Mathematical Language
    Abstract: A controlled natural language for mathematics is an artificial language that is designed in an explicit way with precise computer-readable syntax and semantics. It is based on a single natural language (which for us is English) and can be broadly […]

  • Neural Theorem Proving in Lean using Proof Artifact Co-training and Language Models

    Virtual

    https://youtu.be/EXpmbAfBNnw
    Speaker: Jason Rute, CIBO Technologies
    Title: Neural Theorem Proving in Lean using Proof Artifact Co-training and Language Models
    Abstract: Labeled data for imitation learning of theorem proving in large libraries of formalized mathematics is scarce as such libraries require years of concentrated effort by human specialists to be built. This is particularly challenging when applying […]