During 2025–26, the CMSA will host a seminar on New Technologies in Mathematics, organized by Michael Douglas and Blake Bordelon. The seminar meets on Wednesdays from 2:00–3:00 pm (Eastern Time) in Room G10 at the CMSA, 20 Garden Street, Cambridge, MA 02138; some meetings will be held virtually on Zoom or in a hybrid format. To learn how to attend, please fill out this form or contact Michael Douglas (mdouglas@cmsa.fas.harvard.edu).

The schedule will be updated as talks are confirmed.

Seminar videos can be found on the CMSA YouTube channel: New Technologies in Mathematics Playlist

  • When Computer Algebra Meets Satisfiability: A New Approach to Combinatorial Mathematics

    https://youtu.be/h-LEf4YnWhQ
    Speakers: Curtis Bright, School of Computer Science, University of Windsor, and Vijay Ganesh, Dept. of Electrical and Computer Engineering, University of Waterloo
    Abstract: Solvers for the Boolean satisfiability (SAT) problem have been increasingly used to resolve problems in mathematics due to their […]
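
    A minimal sketch of the underlying idea of casting a combinatorial existence question as Boolean satisfiability; the tiny Ramsey-style instance and the brute-force search standing in for a real SAT solver (and the absence of any computer algebra component) are simplifications for illustration, not the systems discussed in the talk.

    ```python
    # Toy illustration of casting a combinatorial question as satisfiability:
    # can the 10 edges of K5 be 2-colored with no monochromatic triangle?
    # (Yes for K5; the analogous question for K6 is unsatisfiable, since R(3,3) = 6.)
    # A real SAT+CAS pipeline would hand constraints like these to a SAT solver.
    from itertools import combinations, product

    vertices = range(5)
    edges = list(combinations(vertices, 2))      # one Boolean variable per edge
    triangles = list(combinations(vertices, 3))

    def violates(coloring):
        """A coloring is bad if some triangle has all three edges the same color."""
        for a, b, c in triangles:
            if len({coloring[(a, b)], coloring[(a, c)], coloring[(b, c)]}) == 1:
                return True
        return False

    # Exhaustive search over all 2^10 assignments stands in for a SAT solver.
    satisfying = None
    for bits in product([0, 1], repeat=len(edges)):
        coloring = dict(zip(edges, bits))
        if not violates(coloring):
            satisfying = coloring
            break

    print("satisfiable:", satisfying is not None)
    ```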

  • The Principles of Deep Learning Theory

    Virtual

    https://youtu.be/wXZKoHEzASg
    Speaker: Dan Roberts, MIT & Salesforce
    Abstract: Deep learning is an exciting approach to modern artificial intelligence based on artificial neural networks. The goal of this talk is to provide a blueprint — using tools from physics — for theoretically analyzing deep neural networks of practical relevance. This […]
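
    As a small numerical companion to this physics-style analysis of wide networks (not the talk's formalism itself), the sketch below tracks preactivation statistics through a random, wide multilayer perceptron at initialization; the width, depth, activation, and weight variance are arbitrary choices.

    ```python
    # Empirically track preactivation statistics through a random, wide MLP
    # at initialization.  Width, depth, and the weight variance C_W are
    # arbitrary illustrative choices.
    import numpy as np

    rng = np.random.default_rng(0)
    width, depth, C_W = 1024, 10, 2.0     # C_W / width scaling keeps signals O(1)

    z = rng.normal(size=(width,))         # preactivations for a single input
    for layer in range(depth):
        W = rng.normal(scale=np.sqrt(C_W / width), size=(width, width))
        z = W @ np.tanh(z)                # next layer's preactivations
        print(f"layer {layer + 1}: mean {z.mean():+.3f}, variance {z.var():.3f}")
    ```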

  • Hierarchical Transformers are More Efficient Language Models

    Virtual

    https://youtu.be/soqWNyrdjkw
    Speaker: Piotr Nawrot, University of Warsaw
    Abstract: Transformer models yield impressive results on many NLP and sequence modeling tasks. Remarkably, Transformers can handle long sequences, which allows them to produce long coherent outputs: full paragraphs produced by GPT-3 or well-structured images produced by DALL-E. These large language […]
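
    A minimal numpy sketch of the sequence-shortening idea, in which groups of adjacent token embeddings are merged before attention and later upsampled; mean-pooling and the specific shapes are illustrative assumptions rather than the paper's architecture.

    ```python
    # Sketch of sequence "shortening": merging each block of k adjacent token
    # embeddings before attention cuts the quadratic cost from O(L^2) to
    # O((L/k)^2).  Mean-pooling and the shapes below are illustrative only.
    import numpy as np

    L, d, k = 512, 64, 4                      # sequence length, embed dim, shortening factor
    tokens = np.random.randn(L, d)

    # Downsample: average each block of k consecutive tokens.
    short = tokens.reshape(L // k, k, d).mean(axis=1)     # (L/k, d)

    # Attention score matrices before and after shortening.
    full_scores  = tokens @ tokens.T / np.sqrt(d)         # (512, 512)
    short_scores = short @ short.T / np.sqrt(d)           # (128, 128)
    print(full_scores.shape, "->", short_scores.shape)

    # Upsample back to the original resolution by repeating each pooled vector.
    restored = np.repeat(short, k, axis=0)                # (512, d)
    ```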

  • Unreasonable effectiveness of the quantum complexity view on quantum many-body physics

    Virtual

    https://youtu.be/wKCgR3aFpnc
    Speaker: Anurag Anshu, Department of EECS & Challenge Institute for Quantum Computation, UC Berkeley
    Abstract: A central challenge in quantum many-body physics is to estimate the properties of natural quantum states, such as the quantum ground states and Gibbs states. Quantum Hamiltonian complexity […]
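
    For intuition about the objects involved (not the complexity-theoretic results), here is a small sketch that exactly diagonalizes a two-qubit Heisenberg Hamiltonian to obtain its ground-state energy and a Gibbs state; scaling this brute-force approach to many qubits is precisely what becomes intractable.

    ```python
    # Build the 2-qubit Heisenberg Hamiltonian H = X⊗X + Y⊗Y + Z⊗Z and compute
    # its ground-state energy and a Gibbs state exp(-βH)/Z by exact diagonalization.
    # This only works for tiny systems; the cost grows exponentially with qubit count.
    import numpy as np

    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Y = np.array([[0, -1j], [1j, 0]])
    Z = np.array([[1, 0], [0, -1]], dtype=complex)

    H = sum(np.kron(P, P) for P in (X, Y, Z))

    energies, states = np.linalg.eigh(H)
    print("ground-state energy:", energies[0].real)        # -3 (the singlet)

    beta = 1.0
    boltz = np.exp(-beta * energies)
    rho = (states * boltz) @ states.conj().T / boltz.sum() # Gibbs state exp(-βH)/Z
    print("Gibbs energy <H>:", np.trace(rho @ H).real)
    ```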

  • Machine learning with mathematicians

    https://youtu.be/DMvmcTQuofE
    Speaker: Alex Davies, DeepMind
    Abstract: Can machine learning be a useful tool for research mathematicians? There are many examples of mathematicians pioneering new technologies to aid our understanding of the mathematical world: using very early computers to help formulate the Birch and Swinnerton-Dyer conjecture and using computer aid to […]

  • Neural diffusion PDEs, differential geometry, and graph neural networks

    https://youtu.be/7KMcXHwQzZs
    Speaker: Michael Bronstein, University of Oxford and Twitter
    Abstract: In this talk, I will make connections between Graph Neural Networks (GNNs) and non-Euclidean diffusion equations. I will show that drawing on methods from the domain of differential geometry, it is possible to provide a […]
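
    A minimal sketch of the diffusion viewpoint: explicit-Euler integration of graph heat flow on node features, with message passing given by the graph Laplacian; the toy graph, step size, and fixed (unlearned) diffusivity are simplifying assumptions.

    ```python
    # Explicit-Euler integration of graph heat diffusion dX/dt = -L X,
    # the simplest instance of the "GNN layers as discretized diffusion" view.
    # Learned models replace the fixed Laplacian with a learned diffusivity.
    import numpy as np

    A = np.array([[0, 1, 1, 0],       # adjacency matrix of a small example graph
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    L = np.diag(A.sum(axis=1)) - A    # combinatorial graph Laplacian

    X = np.random.randn(4, 2)         # node features (4 nodes, 2 channels)
    tau, steps = 0.1, 50
    for _ in range(steps):
        X = X - tau * (L @ X)         # one diffusion step ~ one "layer"

    print(X)                          # features smooth out across connected nodes
    ```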

  • Toward Demystifying Transformers and Attention

    Virtual

    https://youtu.be/MSw8HV0eHo8
    Speaker: Ben Edelman, Harvard Computer Science
    Abstract: Over the past several years, attention mechanisms (primarily in the form of the Transformer architecture) have revolutionized deep learning, leading to advances in natural language processing, computer vision, code synthesis, protein structure prediction, and beyond. Attention has a remarkable ability to enable the […]
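
    For reference, a minimal numpy implementation of the core operation the abstract refers to, single-head scaled dot-product attention; masking, multiple heads, and learned projections are omitted.

    ```python
    # Minimal single-head scaled dot-product attention:
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V.
    import numpy as np

    def attention(Q, K, V):
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)                    # (n_q, n_k) similarity logits
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)     # row-wise softmax
        return weights @ V                                 # weighted average of values

    n, d = 6, 8
    Q, K, V = (np.random.randn(n, d) for _ in range(3))
    print(attention(Q, K, V).shape)                        # (6, 8)
    ```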

  • Bootstrapping hyperbolic manifolds

    Virtual

    https://youtu.be/updzX0XPYU4
    Speaker: James Bonifacio, Cambridge DAMTP
    Abstract: Hyperbolic manifolds are a class of Riemannian manifolds that are important in mathematics and physics, playing a prominent role in topology, number theory, and string theory. Associated with a given hyperbolic metric is a sequence of numbers corresponding to the discrete eigenvalues of the […]
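
    The "sequence of numbers" mentioned here is the spectrum of the Laplace–Beltrami operator; for reference, the standard eigenvalue problem on a closed hyperbolic manifold (M, g) reads

    ```latex
    \Delta_g \psi_i + \lambda_i \psi_i = 0, \qquad
    0 = \lambda_0 < \lambda_1 \le \lambda_2 \le \cdots \to \infty,
    ```

    and, roughly, the bootstrap approach of the title derives consistency conditions that such a spectrum must satisfy, constraining low-lying eigenvalues like the first nonzero one.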

  • Scaling Laws and Their Implications for Coding AI

    Virtual

    https://youtu.be/Suhp3OLASSo
    Speaker: Jared Kaplan, Johns Hopkins Dept. of Physics & Astronomy
    Abstract: Scaling laws and associated downstream trends can be used as an organizing principle when thinking about current and future ML progress. I will briefly review scaling laws for generative models in a number of […]
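
    A small sketch of what fitting such a scaling law looks like in practice: a power law L(N) = (N_c / N)^alpha fit to loss-versus-parameter-count data in log-log space. The data points below are made up for illustration, and the functional form is only the simplest member of the family discussed in this line of work.

    ```python
    # Fit L(N) = (N_c / N)^alpha to loss vs. parameter-count data; in log-log
    # space this reduces to ordinary linear regression.  Data are hypothetical.
    import numpy as np

    N    = np.array([1e6, 1e7, 1e8, 1e9, 1e10])    # parameter counts (made up)
    loss = np.array([5.2, 4.1, 3.3, 2.6, 2.1])     # held-out losses (made up)

    slope, intercept = np.polyfit(np.log(N), np.log(loss), 1)
    alpha = -slope                                  # log L = -alpha log N + alpha log N_c
    N_c   = np.exp(intercept / alpha)
    print(f"alpha = {alpha:.3f},  N_c = {N_c:.3e}")

    predict = lambda n: (N_c / n) ** alpha          # extrapolate to a larger model
    print("predicted loss at 1e11 params:", predict(1e11))
    ```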

  • Machine Learning 30 STEM Courses in 12 Departments

    https://youtu.be/QaOZCa8SFvA
    Speaker: Iddo Drori, MIT EE&CS and Columbia School of Engineering
    Abstract: We automatically solve, explain, and generate university-level course problems from thirty STEM courses (at MIT, Harvard, and Columbia) for the first time. We curate a new dataset of course questions and answers across a dozen […]

  • Formal Mathematics Statement Curriculum Learning

    https://youtu.be/4zINaGrPc9M
    Speaker: Stanislas Polu, OpenAI
    Abstract: We explore the use of expert iteration in the context of language modeling applied to formal mathematics. We show that at the same compute budget, expert iteration, by which we mean proof search interleaved with learning, dramatically outperforms proof search only. We also observe that […]
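
    A schematic Python rendering of the expert-iteration control flow described in the abstract, alternating proof search with fine-tuning on the proofs found; the stubbed search and fine-tuning functions are hypothetical placeholders, not the actual prover or language model from the talk.

    ```python
    # Schematic expert-iteration loop for formal theorem proving: search for
    # proofs with the current model, then fine-tune on the verified successes.
    # The stubs below stand in for a real prover and language model.
    import random

    def search_for_proof(model, statement):
        """Stub: pretend the model proves a statement with probability `model`."""
        return f"proof of {statement}" if random.random() < model else None

    def finetune(model, solved):
        """Stub: each batch of solved statements slightly 'improves' the model."""
        return min(1.0, model + 0.05 * len(solved))

    def expert_iteration(model, statements, rounds=5):
        for r in range(rounds):
            solved = [(s, p) for s in statements
                      if (p := search_for_proof(model, s)) is not None]
            model = finetune(model, solved)     # learn only from verified proofs
            print(f"round {r}: solved {len(solved)}/{len(statements)}, model={model:.2f}")
        return model

    expert_iteration(model=0.2, statements=[f"thm_{i}" for i in range(20)])
    ```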

  • Memorizing Transformers

    Virtual

    https://youtu.be/5AoOpFFjW28
    Speaker: Yuhuai Wu, Stanford and Google
    Abstract: Language models typically need to be trained or fine-tuned in order to acquire new knowledge, which involves updating their weights. We instead envision language models that can simply read and memorize new data at inference time, thus acquiring new knowledge immediately. In this talk, I […]
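
    A numpy sketch of the retrieval step behind this kind of memory-augmented attention: for each query, look up the k most similar cached (key, value) pairs from an external memory and attend over just those. The exact brute-force lookup, the shapes, and the omission of any gating with local attention are simplifications, not the paper's full mechanism.

    ```python
    # Retrieval step for memory-augmented attention: attend over the k nearest
    # cached (key, value) pairs instead of the full context.  Brute-force
    # similarity search here; a real system would use an approximate index.
    import numpy as np

    d, memory_size, k = 64, 10_000, 8
    mem_keys   = np.random.randn(memory_size, d)    # keys cached from earlier context
    mem_values = np.random.randn(memory_size, d)    # corresponding cached values

    def knn_attend(query):
        scores = mem_keys @ query / np.sqrt(d)       # similarity to every memory slot
        top = np.argpartition(scores, -k)[-k:]       # indices of the k best matches
        w = np.exp(scores[top] - scores[top].max())
        w /= w.sum()                                 # softmax over retrieved slots only
        return w @ mem_values[top]                   # (d,) retrieved summary vector

    out = knn_attend(np.random.randn(d))
    print(out.shape)                                 # (64,)
    ```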