During 2025–26, the CMSA will host a seminar on New Technologies in Mathematics, organized by Michael Douglas and Blake Bordelon. The seminar meets on Wednesdays from 2:00–3:00 pm (Eastern Time) in Room G10 at the CMSA, 20 Garden Street, Cambridge, MA 02138; some meetings will be held virtually on Zoom or in a hybrid format. To learn how to attend, please fill out this form or contact Michael Douglas (mdouglas@cmsa.fas.harvard.edu).

The schedule will be updated as talks are confirmed.

Seminar videos can be found on the CMSA YouTube channel: New Technologies in Mathematics Playlist

  • Why abstraction is the key to intelligence, and what we’re still missing

    Video: https://youtu.be/3Nxe7J07TQY
    Speaker: Francois Chollet, Google
    Abstract: This talk provides a personal perspective on the way forward towards more human-like and more intelligent artificial systems. Traditionally, symbolic and probabilistic methods have dominated the domains of concept formation, abstraction, and automated reasoning. More recently, deep […]

  • Constructions in combinatorics via neural networks

    Video: https://youtu.be/ufG0YLj_sik
    Speaker: Adam Wagner, Tel Aviv University
    Abstract: Recently, significant progress has been made in the area of machine learning algorithms, and they have quickly become some of the most exciting tools in a scientist’s toolbox. In particular, recent advances in the field of reinforcement learning have led […]
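
    The abstract is truncated here, but the engine behind the constructions in Wagner's paper of the same title is the cross-entropy method of reinforcement learning: sample candidate objects from a parameterized distribution, score them, and refit the distribution to the top scorers. A minimal sketch on bitstrings, with a toy reward standing in for a genuine combinatorial objective such as a graph invariant:

    ```python
    import numpy as np

    def reward(bits):
        # Toy objective: reward 1s but penalize adjacent pairs of 1s.
        # A real application would score a graph or sequence property here.
        return bits.sum() - 2 * (bits[1:] & bits[:-1]).sum()

    def cross_entropy_search(n_bits=20, pop=200, elite_frac=0.1, iters=50, seed=0):
        rng = np.random.default_rng(seed)
        p = np.full(n_bits, 0.5)                 # independent Bernoulli parameters
        n_elite = int(pop * elite_frac)
        for _ in range(iters):
            samples = (rng.random((pop, n_bits)) < p).astype(int)
            scores = np.array([reward(s) for s in samples])
            elite = samples[np.argsort(scores)[-n_elite:]]   # keep the top scorers
            p = 0.9 * p + 0.1 * elite.mean(axis=0)           # smoothed refit
        return p

    print(cross_entropy_search().round(2))   # probabilities concentrate on good bitstrings
    ```

    In the paper, a neural network that generates the object one entry at a time plays the role of the independent Bernoulli parameters p.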

  • New results in Supergravity via ML Technology

    Video: https://youtu.be/zJOWdZZcitk
    Speaker: Thomas Fischbacher, Google
    Abstract: The infrastructure built to power the Machine Learning revolution has many other uses beyond Deep Learning. Starting from a general architecture-level overview of the lower levels of Google’s TensorFlow machine learning library, we review how this has recently helped us to […]
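
    The truncated abstract does not spell out the application, but a central task in this line of work is locating stationary points of a supergravity scalar potential V by minimizing the squared gradient norm |∇V|², with the ML framework supplying automatic differentiation and optimizers. A dependency-free sketch of the idea on a toy potential, using numerical gradients where TensorFlow would use autodiff:

    ```python
    import numpy as np

    def grad(f, x, eps=1e-6):
        # Central-difference gradient; in practice autodiff (e.g. TensorFlow) replaces this.
        g = np.zeros_like(x)
        for i in range(len(x)):
            d = np.zeros_like(x)
            d[i] = eps
            g[i] = (f(x + d) - f(x - d)) / (2 * eps)
        return g

    def find_stationary_point(V, x0, lr=0.005, steps=4000):
        # Minimize L(x) = |grad V(x)|^2; its zeros are stationary points of V,
        # including saddle points that plain minimization of V would miss.
        L = lambda x: float(np.sum(grad(V, x) ** 2))
        x = x0.astype(float)
        for _ in range(steps):
            x = x - lr * grad(L, x)
        return x

    # Toy stand-in for a scalar potential.
    V = lambda x: (x[0] ** 2 - 1) ** 2 + x[0] * x[1] + x[1] ** 2
    print(find_stationary_point(V, np.array([1.1, -0.5])).round(3))
    ```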

  • Computer-Aided Mathematics and Satisfiability

    Video: https://youtu.be/4wHwqYrCqVQ
    Speaker: Marijn Heule, Carnegie Mellon University
    Abstract: Progress in satisfiability (SAT) solving has made it possible to determine the correctness of complex systems and answer long-standing open questions in mathematics. The SAT solving approach is completely automatic and can produce clever though potentially gigantic proofs. We can have confidence […]
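
    As a toy illustration of the encoding side (not of modern solver technology), the sketch below states the 3-pigeons-in-2-holes principle as clauses in conjunctive normal form and checks unsatisfiability by brute force; solvers like those in the talk replace the enumeration with conflict-driven clause learning and can emit checkable proofs:

    ```python
    from itertools import product

    def solve_sat(clauses, n_vars):
        # Brute-force SAT: try every assignment. Literal k > 0 means
        # variable k is true; k < 0 means variable k is false.
        for bits in product([False, True], repeat=n_vars):
            if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
                return bits
        return None

    # Variable 2*p + h + 1 <-> "pigeon p sits in hole h" (3 pigeons, 2 holes).
    clauses = [[2 * p + 1, 2 * p + 2] for p in range(3)]      # each pigeon gets a hole
    clauses += [[-(2 * p + h + 1), -(2 * q + h + 1)]          # no two pigeons share a hole
                for h in range(2) for p in range(3) for q in range(p + 1, 3)]
    print(solve_sat(clauses, 6))  # None: unsatisfiable, as the pigeonhole principle says
    ```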

  • Why explain mathematics to computers?

    Video: https://youtu.be/rRGh97sOtKE
    Speaker: Patrick Massot, Laboratoire de Mathématiques d’Orsay and CNRS
    Abstract: A growing number of mathematicians are having fun explaining mathematics to computers using proof assistant software. This process is called formalization. In this talk I’ll describe what formalization looks like, what kind of things it teaches us, and […]
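
    For a taste of what formalization looks like, here is a small Lean 4 example (an illustration, not taken from the talk): a statement about natural numbers together with a proof by induction that the proof assistant checks mechanically.

    ```lean
    -- 0 is a left identity for addition on Nat, proved by induction.
    -- (Core Lean already provides this as Nat.zero_add; redone here for illustration.)
    theorem zero_add' (n : Nat) : 0 + n = n := by
      induction n with
      | zero => rfl                         -- base case: 0 + 0 = 0 holds by computation
      | succ n ih => rw [Nat.add_succ, ih]  -- 0 + (n+1) = (0 + n) + 1 = n + 1
    ```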

  • When Computer Algebra Meets Satisfiability: A New Approach to Combinatorial Mathematics

    Video: https://youtu.be/h-LEf4YnWhQ
    Speakers: Curtis Bright (School of Computer Science, University of Windsor) and Vijay Ganesh (Dept. of Electrical and Computer Engineering, University of Waterloo)
    Abstract: Solvers for the Boolean satisfiability (SAT) problem have been increasingly used to resolve problems in mathematics due to their […]

  • The Principles of Deep Learning Theory

    Virtual

    Video: https://youtu.be/wXZKoHEzASg
    Speaker: Dan Roberts, MIT & Salesforce
    Abstract: Deep learning is an exciting approach to modern artificial intelligence based on artificial neural networks. The goal of this talk is to provide a blueprint — using tools from physics — for theoretically analyzing deep neural networks of practical relevance. This […]

  • Hierarchical Transformers are More Efficient Language Models

    Virtual

    Video: https://youtu.be/soqWNyrdjkw
    Speaker: Piotr Nawrot, University of Warsaw
    Abstract: Transformer models yield impressive results on many NLP and sequence modeling tasks. Remarkably, Transformers can handle long sequences, which allows them to produce long coherent outputs: full paragraphs produced by GPT-3 or well-structured images produced by DALL-E. These large language […]
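
    The efficiency gain in this hierarchical (Hourglass-style) design comes from running the expensive middle layers on a shortened sequence: attention cost is quadratic in length, so pooling groups of k tokens cuts that cost roughly by a factor of k². A minimal numpy sketch of the shorten/upsample plumbing, with mean pooling standing in for the paper's learned downsampling:

    ```python
    import numpy as np

    def shorten(x, k):
        # Downsample a (T, d) token sequence by mean-pooling groups of k vectors.
        T, d = x.shape
        return x[: T - T % k].reshape(T // k, k, d).mean(axis=1)

    def upsample(x, k, T):
        # Inverse plumbing: repeat each pooled vector k times back to length T.
        return np.repeat(x, k, axis=0)[:T]

    T, d, k = 12, 8, 3
    tokens = np.random.randn(T, d)
    coarse = shorten(tokens, k)        # (4, 8): 3x fewer positions for the middle layers
    restored = upsample(coarse, k, T)  # (12, 8): full resolution for the output layers
    print(coarse.shape, restored.shape)
    ```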

  • Unreasonable effectiveness of the quantum complexity view on quantum many-body physics

    Virtual

    Video: https://youtu.be/wKCgR3aFpnc
    Speaker: Anurag Anshu, Department of EECS & Challenge Institute for Quantum Computation, UC Berkeley
    Abstract: A central challenge in quantum many-body physics is to estimate the properties of natural quantum states, such as the quantum ground states and Gibbs states. Quantum Hamiltonian complexity […]

  • Machine learning with mathematicians

    Video: https://youtu.be/DMvmcTQuofE
    Speaker: Alex Davies, DeepMind
    Abstract: Can machine learning be a useful tool for research mathematicians? There are many examples of mathematicians pioneering new technologies to aid our understanding of the mathematical world: using very early computers to help formulate the Birch and Swinnerton-Dyer conjecture and using computer aid to […]

  • Neural diffusion PDEs, differential geometry, and graph neural networks

    Video: https://youtu.be/7KMcXHwQzZs
    Speaker: Michael Bronstein, University of Oxford and Twitter
    Abstract: In this talk, I will make connections between Graph Neural Networks (GNNs) and non-Euclidean diffusion equations. I will show that drawing on methods from the domain of differential geometry, it is possible to provide a […]
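
    The GNN-as-discretized-diffusion viewpoint in this line of work (e.g. the GRAND model) can be stated in a few lines: node features evolve by dX/dt = (A − I)X, and each explicit Euler step acts like a message-passing layer. A minimal sketch with a fixed row-normalized adjacency matrix standing in for learned attention:

    ```python
    import numpy as np

    def graph_diffusion(X, A, dt=0.1, steps=10):
        # Explicit Euler integration of dX/dt = (A_hat - I) X: heat diffusion
        # on a graph. In GRAND-style models A_hat comes from learned attention
        # and each Euler step plays the role of a GNN layer.
        A_hat = A / A.sum(axis=1, keepdims=True)   # row-normalize: rows sum to 1
        for _ in range(steps):
            X = X + dt * (A_hat @ X - X)
        return X

    # Path graph on 4 nodes with self-loops, 2-dimensional node features.
    A = np.array([[1, 1, 0, 0], [1, 1, 1, 0], [0, 1, 1, 1], [0, 0, 1, 1]], float)
    X = np.array([[1., 0.], [0., 0.], [0., 0.], [0., 1.]])
    print(graph_diffusion(X, A))   # features smooth toward neighborhood averages
    ```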

  • Toward Demystifying Transformers and Attention

    Virtual

    Video: https://youtu.be/MSw8HV0eHo8
    Speaker: Ben Edelman, Harvard Computer Science
    Abstract: Over the past several years, attention mechanisms (primarily in the form of the Transformer architecture) have revolutionized deep learning, leading to advances in natural language processing, computer vision, code synthesis, protein structure prediction, and beyond. Attention has a remarkable ability to enable the […]
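
    Since the talk is about demystifying this one operation, it helps to have it in front of us: a numpy sketch of single-head, unmasked scaled dot-product attention, the building block the Transformer repeats.

    ```python
    import numpy as np

    def attention(Q, K, V):
        # Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V.
        d = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d)                  # (T_q, T_k) similarity logits
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)             # each row is a probability vector
        return w @ V                                   # outputs are weighted averages of values

    T, d = 5, 16
    Q = K = V = np.random.randn(T, d)   # self-attention: Q, K, V from the same tokens
    print(attention(Q, K, V).shape)     # (5, 16)
    ```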