During the Spring 2021 semester, and until further notice, all seminars will take place virtually.
The 2020-2021 Colloquium will take place every Wednesday from 9:00 to 10:00am ET virtually, using Zoom. All CMSA postdocs/members are required to attend the weekly CMSA Members’ Seminars, as well as the weekly CMSA Colloquium series. Please email the seminar organizers to obtain a link. This year’s colloquium will be organized by Wei Gu and Sergiy Verstyuk. The schedule below will be updated as speakers are confirmed.
To learn how to attend, please fill out this form.
Information on previous colloquia can be found here.
|Evelyn Tang (Max Planck Institute for Dynamics and Self-Organization)
|Title: Topology protects chiral edge currents in stochastic systems
Abstract: Living systems can exhibit time-scales much longer than those of the underlying components, as well as collective dynamical behavior. How such global behavior is subserved by stochastic constituents remains unclear. I will present two-dimensional stochastic networks that consist of out-of-equilibrium cycles at the molecular scale and support chiral edge currents in configuration space. I will discuss the topological properties of these networks and their uniquely non-Hermitian features such as exceptional points and vorticity. As these emergent edge currents are associated with macroscopic timescales and length scales, simply tuning a small number of parameters enables varied dynamical phenomena including a global clock, stochastic growth and shrinkage, and synchronization.
|André Luiz de Gouvêa (Northwestern)
|Title: The Brave Nu World
Abstract: Neutrinos are the least understood of the fundamental particles that make up the so-called Standard Model of Particle Physics. Measuring neutrino properties and identifying how they inform our understanding of nature at the shortest distance scales is among the highest priorities of particle physics research today. I will discuss our current understanding of neutrinos, concentrating on the observation of neutrino oscillations and neutrino masses, along with the open questions that arose from these discoveries at the end of the 20th century.
|Mykhaylo Shkolnikov (Princeton)
|Title: Probabilistic approach to free boundary problems and applications
Abstract: We will discuss a recently developed probabilistic approach to (singular) free boundary problems, such as the supercooled Stefan problem. The approach is based on a new notion of solution, referred to as probabilistic, which arises naturally in the context of large system limits of interacting particle systems. In the talk, I will give an example of how such interacting particle systems arise in applications (e.g., finance), then obtain a solution of a free boundary problem in the large system limit, and discuss how this solution can be analyzed mathematically (thereby answering natural questions about the systemic risk in financial systems and neural synchronization in the brain). The talk is based on recent and ongoing joint works with Sergey Nadtochiy, Francois Delarue, Jiacheng Zhang, and Xiling Zhang.
9:00 – 10:00pm ET
|C. Seshadhri (UC Santa Cruz)
|Title: Studying the (in)effectiveness of low dimensional graph embeddings
Abstract: Low dimensional graph embeddings are a fundamental and popular tool used for machine learning on graphs. Given a graph, the basic idea is to produce a low-dimensional vector for each vertex, such that “similarity” in geometric space corresponds to “proximity” in the graph. These vectors can then be used as features in a plethora of machine learning tasks, such as link prediction, community labeling, recommendations, etc. Despite the many results emerging in this area over the past few years, the core premise of these embeddings has received comparatively little scrutiny. Can such low-dimensional embeddings effectively capture the structure of real-world (such as social) networks? Contrary to common wisdom, we mathematically prove and empirically demonstrate that popular low-dimensional graph embeddings do not capture salient properties of real-world networks. We mathematically prove that common low-dimensional embeddings cannot generate graphs with both low average degree and large clustering coefficients, which have been widely established to be empirically true for real-world networks. Empirically, we observe that the embeddings generated by popular methods fail to recreate the triangle structure of real-world networks, and do not perform well on certain community labeling tasks. (Joint work with Ashish Goel, Caleb Levy, Aneesh Sharma, and Andrew Stolman.)
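A minimal numerical sketch of the premise under test (my own construction, not the authors’ experiment): embed each vertex as a random low-dimensional vector, link pairs whose inner product exceeds a threshold, and measure the resulting average degree and global clustering coefficient. The dimension `D` and the threshold are illustrative choices, not values from the paper.

```python
# Hypothetical illustration: a dot-product graph from low-dimensional
# embeddings, with its average degree and global clustering coefficient.
import itertools
import random

random.seed(0)
N, D = 200, 4          # vertices and embedding dimension (assumed values)
THRESHOLD = 4.0        # link i~j when <x_i, x_j> exceeds this (assumed)

vecs = [[random.gauss(0, 1) for _ in range(D)] for _ in range(N)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

adj = [set() for _ in range(N)]
for i, j in itertools.combinations(range(N), 2):
    if dot(vecs[i], vecs[j]) > THRESHOLD:
        adj[i].add(j)
        adj[j].add(i)

avg_degree = sum(len(a) for a in adj) / N
# Each triangle is counted once per edge, i.e. three times in total.
tri3 = sum(len(adj[i] & adj[j]) for i in range(N) for j in adj[i] if i < j)
wedges = sum(len(a) * (len(a) - 1) // 2 for a in adj)
clustering = tri3 / wedges if wedges else 0.0

print(f"average degree:    {avg_degree:.2f}")
print(f"global clustering: {clustering:.3f}")
```

The paper’s claim concerns exactly this tension: for a fixed small `D`, no choice of embedding can push the average degree down while keeping the clustering coefficient as high as in real social networks.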
|David Ben-Zvi (U Texas)
|Title: Electric-Magnetic Duality for Periods and L-functions
Abstract: I will describe joint work with Yiannis Sakellaridis and Akshay Venkatesh, in which ideas originating in quantum field theory are applied to a problem in number theory.
|Omer Tamuz (Caltech)
|Title: Monotone Additive Statistics
Abstract: How should a random quantity be summarized by a single number? We study mappings from random variables to real numbers, focusing on those with the following two properties: (1) monotonicity with respect to first-order stochastic dominance, and (2) additivity for sums of independent random variables. This problem turns out to be connected to the following question: Under what conditions on the random variables X and Y does there exist an independent Z so that X + Z first-order stochastically dominates Y + Z?
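One concrete family with both properties is K_a(X) = (1/a) log E[exp(aX)]: monotonicity follows from monotonicity of the exponential, and additivity for independent sums follows from the factorization of the moment generating function. Whether this is exactly the family characterized in the paper is not claimed here; the sketch below (with arbitrary illustrative distributions `X`, `Y` and parameter `a`) just checks additivity numerically.

```python
# Numerical check that K_a(X + Y) = K_a(X) + K_a(Y) for independent X, Y.
import math

def K(a, pmf):
    """K_a of a finite random variable given as {value: probability}."""
    return math.log(sum(p * math.exp(a * v) for v, p in pmf.items())) / a

def convolve(pmf_x, pmf_y):
    """PMF of X + Y for independent X and Y."""
    out = {}
    for vx, px in pmf_x.items():
        for vy, py in pmf_y.items():
            out[vx + vy] = out.get(vx + vy, 0.0) + px * py
    return out

X = {0: 0.5, 1: 0.5}        # fair coin (illustrative)
Y = {-1: 0.25, 2: 0.75}     # skewed gamble (illustrative)
a = 0.7

lhs = K(a, convolve(X, Y))
rhs = K(a, X) + K(a, Y)
print(lhs, rhs)             # the two values agree
```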
(Joint work with Tobias Fritz, Xiaosheng Mu, Luciano Pomatto and Philipp Strack.)
|Piotr Indyk (MIT)
|Title: Learning-Based Sampling and Streaming
Abstract: Classical algorithms typically provide “one size fits all” performance, and do not leverage properties or patterns in their inputs. A recent line of work aims to address this issue by developing algorithms that use machine learning predictions to improve their performance. In this talk I will present two examples of this type, in the context of streaming and sampling algorithms. In particular, I will show how to use machine learning predictions to improve the performance of (a) low-memory streaming algorithms for frequency estimation (ICLR’19), and (b) sampling algorithms for estimating the support size of a distribution (ICLR’21). Both algorithms use an ML-based predictor that, given a data item, estimates the number of times the item occurs in the input data set. (The talk will cover material from papers co-authored with T Eden, CY Hsu, D Katabi, S Narayanan, R Rubinfeld, S Silwal, T Wagner and A Vakilian.)
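A hedged sketch of the general idea, not the papers’ actual algorithm (the predictor, parameters, and structure here are my own stand-ins): a Count-Min sketch in which items that a “learned” oracle flags as heavy hitters are routed to exact dedicated counters, so the frequent items avoid collision error in the shared sketch rows.

```python
# Toy "learned" Count-Min sketch: predicted-heavy items get exact counters.
import random

random.seed(1)
WIDTH, DEPTH = 50, 3
SEEDS = [7, 11, 13]                      # one hash seed per sketch row

sketch = [[0] * WIDTH for _ in range(DEPTH)]
exact = {}                               # dedicated counters for predicted-heavy items

def h(seed, item):
    return hash((seed, item)) % WIDTH

def predicted_heavy(item):
    # Stand-in for a learned predictor: here simply a fixed whitelist.
    return item in {"popular"}

def update(item):
    if predicted_heavy(item):
        exact[item] = exact.get(item, 0) + 1
    else:
        for row, seed in zip(sketch, SEEDS):
            row[h(seed, item)] += 1

def estimate(item):
    if predicted_heavy(item):
        return exact.get(item, 0)
    return min(row[h(seed, item)] for row, seed in zip(sketch, SEEDS))

for _ in range(1000):
    update("popular")
for _ in range(500):
    update(f"tail-{random.randrange(200)}")

print(estimate("popular"))   # exact count: 1000
```

The design point: sketch error is dominated by collisions with heavy items, so removing even a few correctly predicted heavy hitters from the shared rows improves the estimates for everything else.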
|Chiu-Chu Melissa Liu (Columbia)
|Title: Topological Recursion and Crepant Transformation Conjecture
Abstract: The Crepant Transformation Conjecture (CTC), first proposed by Yongbin Ruan and later refined/generalized by others, relates Gromov-Witten (GW) invariants of K-equivalent smooth varieties or smooth Deligne-Mumford stacks. We will outline a proof of all-genus open and closed CTC for symplectic toric Calabi-Yau 3-orbifolds based on joint work with Bohan Fang, Song Yu, and Zhengyu Zong. Our proof relies on the Remodeling Conjecture (proposed by Bouchard-Klemm-Marino-Pasquetti and proved in full generality by Fang, Zong and the speaker) relating open and closed GW invariants of a symplectic toric Calabi-Yau 3-orbifold to invariants of its mirror curve defined by Chekhov-Eynard-Orantin Topological Recursion.
|Weinan E (Princeton)
|Title: Machine Learning and PDEs
Abstract: I will discuss two topics:
|Thore Graepel (DeepMind/UCL)
|Title: From AlphaGo to MuZero – Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model
Abstract: Constructing agents with planning capabilities has long been one of the main challenges in the pursuit of artificial intelligence. Tree-based planning methods have enjoyed huge success in challenging domains, such as chess and Go, where a perfect simulator is available. However, in real-world problems the dynamics governing the environment are often complex and unknown. In this work we present the MuZero algorithm which, by combining a tree-based search with a learned model, achieves superhuman performance in a range of challenging and visually complex domains, without any knowledge of their underlying dynamics. MuZero learns a model that, when applied iteratively, predicts the quantities most directly relevant to planning: the reward, the action-selection policy, and the value function. When evaluated on 57 different Atari games – the canonical video game environment for testing AI techniques, in which model-based planning approaches have historically struggled – our new algorithm achieved a new state of the art. When evaluated on Go, chess and shogi, without any knowledge of the game rules, MuZero matched the superhuman performance of the AlphaZero algorithm that was supplied with the game rules.
|Kui Ren (Columbia)
|Title: Inversion via Optimization: Revisiting the Classical Least-Squares Formulation of Inverse Problems
Abstract: The classical least-squares formulation of inverse problems has provided a successful framework for the computational solutions of those problems. In recent years, modifications and alternatives have been proposed to overcome some of the disadvantages of this classical formulation in dealing with new applications. This talk intends to provide a (likely biased) overview of the recent developments in constructing new least-squares formulations for model- and data-driven solutions of inverse problems.
|Siu-Cheong Lau (Boston U)
|Title: An algebro-geometric formulation of computing machines
Abstract: Neural networks in machine learning bear an obvious similarity to quiver representation theory. The main gap between the two subjects is that the network functions produced from two isomorphic quiver representations are not equal, due to the presence of non-linear activation functions, which are not equivariant under the automorphism group. This violates the important math/physics principle that isomorphic objects should produce the same results. In this talk, I will introduce a general formulation using moduli spaces of framed modules over (noncommutative) algebras that closes this gap. Metrics over the moduli space are crucial. I will also explain uniformization between spherical, Euclidean and hyperbolic moduli.
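The non-equivariance in question can be seen in a one-arrow toy example (my construction, for illustration only): the weight pairs (w, v) and (c·w, v/c) are isomorphic as quiver representations and define the same linear map, yet they give different network functions once a nonlinearity such as tanh is inserted between the two layers.

```python
# Isomorphic reps agree as linear maps but disagree as network functions.
import math

w, v, c, x = 0.8, 1.5, 3.0, 1.0   # illustrative weights, scaling, and input

linear_1 = v * (w * x)
linear_2 = (v / c) * (c * w * x)          # rescaled rep: identical linear map

net_1 = v * math.tanh(w * x)
net_2 = (v / c) * math.tanh(c * w * x)    # same reps, different network outputs

print(linear_1, linear_2)   # equal
print(net_1, net_2)         # not equal: tanh is not scaling-equivariant
```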
|Vasco Carvalho (Cambridge)
|Title: The Economy as a Complex Production Network
Abstract: A modern economy is an intricately linked web of specialized production units, each relying on the flow of inputs from their suppliers to produce their own output, which in turn is routed towards other downstream units. From this production network vantage point we: (i) present the theoretical foundations for the role of such input linkages as a shock propagation channel and as a mechanism for transforming micro-level shocks into macroeconomic, economy-wide fluctuations; (ii) selectively survey both empirical and simulation-based studies that attempt to ascertain the relevance and quantitative bite of this argument; and (time permitting) (iii) discuss a range of domains to which this networked production view is currently being extended.
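The propagation mechanism in (i) can be sketched in a stylized linear input-output economy (illustrative numbers, not from the talk): sectoral outputs solve x = A x + b, so a micro shock to one sector’s final demand spreads to every sector through the Leontief inverse (I - A)^{-1}.

```python
# Stylized 3-sector input-output economy: a demand shock to one sector
# propagates upstream through input linkages.
A = [[0.0, 0.3, 0.1],   # A[i][j]: units of sector i's output per unit of sector j
     [0.2, 0.0, 0.4],
     [0.1, 0.2, 0.0]]
b = [1.0, 1.0, 1.0]     # baseline final demand

def solve(A, b, iters=200):
    """Solve x = A x + b by fixed-point iteration (converges since ||A|| < 1)."""
    x = b[:]
    for _ in range(iters):
        x = [b[i] + sum(A[i][j] * x[j] for j in range(len(b)))
             for i in range(len(b))]
    return x

baseline = solve(A, b)
shocked = solve(A, [1.0, 1.0, 0.9])   # 10% negative demand shock to sector 2

print([round(s - t, 4) for s, t in zip(shocked, baseline)])
# every sector's output falls, not only the shocked one
```

Because the Neumann series (I - A)^{-1} = I + A + A² + … has nonnegative entries, the idiosyncratic shock depresses output in all connected sectors, which is the micro-to-macro channel the abstract describes.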
9:00 – 10:00pm ET
|Shamit Kachru (Stanford)
|Title: K3 Metrics from String Theory
Abstract: Calabi-Yau manifolds have played a central role in important developments in string theory and mathematical physics. Famously, they admit Ricci flat metrics — but the proof of that fact is not constructive, and the metrics remain mysterious. K3 is perhaps the simplest non-trivial compact Calabi-Yau space. In this talk, I describe two different methods of constructing (smooth, Ricci flat) K3 metrics, and a string theory duality which relates them. The duality re-sums infinite towers of disc instanton corrections via a purely classical infinite-dimensional hyperkahler quotient construction, which can be practically implemented.
|David Kazhdan (Hebrew University)
|Title: On Applications of Algebraic Combinatorics to Algebraic Geometry
Abstract: I present a derivation of a number of results on morphisms of high Schmidt rank from a result in Algebraic Combinatorics. In particular, I will explain the flatness of such morphisms and show that their fibers have rational singularities.
|Mariangela Lisanti (Princeton University)
|Title: Mapping the Milky Way’s Dark Matter Halo with Gaia
Abstract: The Gaia mission is in the process of mapping roughly 1% of the Milky Way’s stars, nearly a billion in total. This data set is unprecedented and provides a unique view into the formation history of our Galaxy and its associated dark matter halo. I will review results based on the most recent Gaia data release, demonstrating how the evolution of the Galaxy can be deciphered from the stellar remnants of massive satellite galaxies that merged with the Milky Way early on. This analysis is an inherently “big data” problem, and I will discuss how we are leveraging machine learning techniques to advance our understanding of the Galaxy’s evolution. Our results indicate that the local dark matter is not in equilibrium, as typically assumed, and instead exhibits distinctive dynamics tied to the disruption of satellite galaxies. The updated dark matter map built from the Gaia data has ramifications for direct detection experiments, which search for the interactions of these particles in terrestrial targets.
|Gil Kalai (Hebrew University and IDC Herzliya)
|Title: Statistical, mathematical, and computational aspects of noisy intermediate-scale quantum computers
Abstract: Noisy intermediate-scale quantum (NISQ) Computers hold the key for important theoretical and experimental questions regarding quantum computers. In the lecture I will describe some questions about mathematics, statistics and computational complexity which arose in my study of NISQ systems and are related to
|Marta Lewicka (University of Pittsburgh)
|Title: Quantitative immersability of Riemann metrics and the infinite hierarchy of prestrained shell models
Abstract: We propose results that relate the following two contexts:
|Jonathan Heckman (University of Pennsylvania)
|Title: Top Down Approach to Quantum Fields
Abstract: Quantum field theory (QFT) is the common language of particle physicists, cosmologists, and condensed matter physicists. Even so, many fundamental aspects of QFT remain poorly understood. I discuss some of the recent progress made in understanding QFT using the geometry of extra dimensions predicted by string theory, highlighting in particular the special role of seemingly “exotic” higher-dimensional supersymmetric QFTs with no length scales, known as six-dimensional superconformal field theories (6D SCFTs). We have recently classified all examples of such 6D SCFTs, and are now using this classification to extract observables from strongly correlated systems, both in theories with more than four spacetime dimensions and in spacetimes with four or fewer dimensions. Along the way, I will also highlight the remarkable interplay between physical and mathematical structures in the study of such systems.
|Surya Ganguli (Stanford)
|Title: Weaving together machine learning, theoretical physics, and neuroscience through mathematics
Abstract: An exciting area of intellectual activity in this century may well revolve around a synthesis of machine learning, theoretical physics, and neuroscience. The unification of these fields will likely enable us to exploit the power of complex systems analysis, developed in theoretical physics and applied mathematics, to elucidate the design principles governing neural systems, both biological and artificial, and deploy these principles to develop better algorithms in machine learning. We will give several vignettes in this direction, including: (1) determining the best optimization problem to solve in order to perform regression in high dimensions; (2) finding exact solutions to the dynamics of generalization error in deep linear networks; (3) developing interpretable machine learning to derive and understand state-of-the-art models of the retina; (4) analyzing and explaining the origins of hexagonal firing patterns in recurrent neural networks trained to path-integrate; (5) delineating fundamental theoretical limits on the energy, speed, and accuracy with which non-equilibrium sensors can detect signals.
|Kevin Buzzard (Imperial College London)
|Title: Teaching proofs to computers
Abstract: A mathematical proof is a sequence of logical statements in a precise language, obeying some well-defined rules. In that sense it is very much like a computer program. Various computer tools have appeared over the last 50 years which take advantage of this analogy by turning the mathematical puzzle of constructing a proof of a theorem into a computer game. The newest tools are now capable of understanding some parts of modern research mathematics. In spite of this, these tools are not used in mathematics departments, perhaps because they are not yet capable of telling mathematicians *something new*.
|Jose A. Scheinkman (Columbia)
|Title: Re-pricing avalanches
Abstract: Monthly aggregate price changes exhibit chronic fluctuations but the aggregate shocks that drive these fluctuations are often elusive. Macroeconomic models often add stochastic macro-level shocks such as technology shocks or monetary policy shocks to produce these aggregate fluctuations. In this paper, we show that a state-dependent pricing model with a large but finite number of firms is capable of generating large fluctuations in the number of firms that adjust prices in response to an idiosyncratic shock to a firm’s cost of price adjustment. These fluctuations, in turn, cause fluctuations in aggregate price changes even in the absence of aggregate shocks. (Joint work with Makoto Nirei.)
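A toy menu-cost cascade in this spirit (my stylization of the mechanism, not the paper’s model): each firm adjusts its price when its gap to the optimal price crosses a threshold, and every adjustment nudges the remaining firms’ gaps, so a single idiosyncratic adjustment can trigger an avalanche of re-pricings whose size fluctuates with the initial configuration. All parameters below are assumptions for illustration.

```python
# Stylized re-pricing avalanche: one forced adjustment can cascade.
import random

random.seed(2)
N = 1000
THRESHOLD = 1.0
SPILLOVER = 0.002   # how much each adjustment shifts every other firm's gap

gaps = [random.uniform(0.0, THRESHOLD) for _ in range(N)]

def avalanche(gaps, first):
    """Force firm `first` to adjust; return how many firms adjust in total."""
    gaps = gaps[:]
    queue, adjusted = [first], set()
    while queue:
        i = queue.pop()
        if i in adjusted:
            continue
        adjusted.add(i)
        gaps[i] = 0.0
        for j in range(N):              # adjustment spills over to all others
            if j not in adjusted:
                gaps[j] += SPILLOVER
                if gaps[j] >= THRESHOLD:
                    queue.append(j)
    return len(adjusted)

print("avalanche size:", avalanche(gaps, 0))
```

Running this with different seeds shows avalanche sizes ranging from a handful of firms to nearly the whole economy, which is the kind of aggregate fluctuation-without-aggregate-shock the abstract describes.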
|Eric J. Heller (Harvard)
|Title: Branched Flow
Abstract: In classical and quantum phase-space flow, there exists a regime of great physical relevance that is belatedly but rapidly generating a new field. Under evolution subject to smooth, random, weakly deflecting but persistent perturbations, a remarkable regime develops: branched flow. Between the first cusp catastrophes at the outset and the fully chaotic statistical flow reached much later lies this visually beautiful regime. It applies to tsunami wave propagation, freak wave formation, light propagation, cosmic microwaves arriving from pulsars, electron flow in metals and devices, sound propagation in the atmosphere and oceans, the large-scale structure of the universe, and much more. The mathematical structure of this flow is only partially understood, involving exponential instability coexisting with “accidental” stability. The flow is qualitatively universal, but this has not been quantified. Many questions arise, including the scale(s) of the random medium and the time evolution of manifolds and “fuzzy” manifolds in phase space. The classical-quantum (ray-wave) correspondence in this flow is only partially understood. This talk will be an introduction to the phenomenon, both visual and mathematical, emphasizing unanswered questions.
|Douglas Arnold (U of Minnesota)
|Title: Preserving geometry in numerical discretization
Abstract: An important design principle for numerical methods for differential equations is that the discretizations preserve key geometric, topological, and algebraic structures of the original differential system. For ordinary differential equations, such geometric integrators were developed at the end of the last century, enabling stunning computations in celestial mechanics and other applications that would have been impossible without them. Since then, structure-preserving discretizations have been developed for partial differential equations. One of the prime examples has been the finite element exterior calculus or FEEC, in which the structures to preserve are related to Hilbert complexes underlying the PDEs, the de Rham complex being a canonical example. FEEC has led to highly successful new numerical methods for problems in fluid mechanics, electromagnetism, and other applications which relate to the de Rham complex. More recently, new tools have been developed which extend the applications of FEEC far beyond the de Rham complex, leading to progress in discretizations of problems from solid mechanics, materials science, and general relativity.
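The payoff of structure preservation is easy to see in the classic harmonic-oscillator example (a standard textbook illustration, not the speaker’s code): explicit Euler lets the energy drift exponentially, while symplectic Euler, which preserves phase-space area, keeps the energy bounded near its initial value.

```python
# Explicit Euler vs. symplectic Euler for q' = p, p' = -q, H = (p^2 + q^2)/2.
DT, STEPS = 0.05, 2000

def energy(q, p):
    return 0.5 * (p * p + q * q)

# Explicit Euler: energy grows by a factor (1 + DT^2) every step.
q, p = 1.0, 0.0
for _ in range(STEPS):
    q, p = q + DT * p, p - DT * q
explicit_energy = energy(q, p)

# Symplectic Euler: update p first, then use the new p to update q.
q, p = 1.0, 0.0
for _ in range(STEPS):
    p = p - DT * q
    q = q + DT * p
symplectic_energy = energy(q, p)

print(explicit_energy, symplectic_energy)   # initial energy was 0.5
```

The symplectic scheme is no more expensive per step; it simply respects the geometric structure of the flow, which is exactly the design principle the abstract highlights.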
|Manuel Blum and Lenore Blum (Carnegie Mellon)
|Title: What can Theoretical Computer Science Contribute to the Discussion of Consciousness?
Abstract: The quest to understand consciousness, once the purview of philosophers and theologians, is now actively pursued by scientists of many stripes. We study consciousness from the perspective of theoretical computer science. This is done by formalizing the Global Workspace Theory (GWT) originated by cognitive neuroscientist Bernard Baars and further developed by him, Stanislas Dehaene, and others. We give a precise formal definition of a Conscious Turing Machine (CTM), also called Conscious AI, in the spirit of Alan Turing’s simple yet powerful definition of a computer. We are not looking for a complex model of the brain nor of cognition but for a simple model of (the admittedly complex concept of) consciousness.