New Technologies in Mathematics Seminar Series

During the Fall 2020 semester, the CMSA will be hosting a seminar on Computers and Math, organized by Michael Douglas. This seminar will take place on Wednesdays from 3:00 – 4:00pm. There will also be a group meeting on Mondays from 9:30 – 10:30am. Both of these meetings will take place virtually. To learn how to attend, please fill out this form or contact Michael Douglas (mdouglas@cmsa.fas.harvard.edu).

The schedule below will be updated as talks are confirmed.

Date: 9/16/2020
Speaker: William Hamilton, McGill University and MILA
Slides available
Title: Graph Representation Learning: Recent Advances and Open Challenges

Abstract: Graph-structured data is ubiquitous throughout the natural and social sciences, from telecommunication networks to quantum chemistry. Building relational inductive biases into deep learning architectures is crucial if we want systems that can learn, reason, and generalize from this kind of data. Recent years have seen a surge in research on graph representation learning, most prominently in the development of graph neural networks (GNNs). Advances in GNNs have led to state-of-the-art results in numerous domains, including chemical synthesis, 3D vision, recommender systems, question answering, and social network analysis. In the first part of this talk, I will provide an overview and summary of recent progress in this fast-growing area, highlighting foundational methods and theoretical motivations. In the second part, I will discuss fundamental limitations of the current GNN paradigm and propose open challenges for the theoretical advancement of the field.
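
As a minimal illustration of the message-passing idea behind GNNs (not taken from the talk or its slides), the sketch below implements a single graph-convolution-style layer in NumPy: each node averages the features of itself and its neighbors, then applies a learned linear map and a ReLU nonlinearity. The function name, dimensions, and toy data are all illustrative assumptions.

```python
# Illustrative sketch of one GNN message-passing layer (GCN-style).
import numpy as np

def gnn_layer(A, H, W):
    """One message-passing step: aggregate neighbor features, then transform.

    A : (n, n) adjacency matrix
    H : (n, d_in) node feature matrix
    W : (d_in, d_out) learned weight matrix
    """
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)  # degree of each node (incl. self)
    messages = (A_hat / deg) @ H            # mean-aggregate neighbor features
    return np.maximum(messages @ W, 0.0)    # linear transform + ReLU

# Toy example: a 4-node path graph with random node features.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = rng.normal(size=(4, 3))   # 3-dimensional input features per node
W = rng.normal(size=(3, 2))   # learnable weights (here just random)
print(gnn_layer(A, H, W).shape)  # -> (4, 2): a 2-dim embedding per node
```

Stacking several such layers lets information propagate over multi-hop neighborhoods, which is the basic mechanism the talk's overview of GNN methods builds on.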
Date: 9/23/2020
Speaker: Andrea Montanari, Departments of Electrical Engineering and Statistics, Stanford
Title: Self-induced regularization from linear regression to neural networks

Abstract: Modern machine learning methods, most notably multi-layer neural networks, require fitting highly non-linear models with tens of thousands to millions of parameters. Despite this, little attention is paid to the regularization mechanisms that control model complexity. Indeed, the resulting models are often so complex as to achieve vanishing training error: they interpolate the data. Nevertheless, these models generalize well to unseen data: they have small test error. I will discuss several examples of this phenomenon, beginning with a simple linear regression model and ending with two-layer neural networks in the so-called lazy regime. For these examples, precise asymptotics can be determined mathematically using tools from random matrix theory. I will try to extract a unifying picture. A common feature is that a complex unregularized nonlinear model becomes essentially equivalent to a simpler model, which is however regularized in a non-trivial way.
[Based on joint papers with Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, Feng Ruan, Youngtak Sohn, Jun Yan, and Yiqiao Zhong]
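
As a minimal illustration of the interpolation phenomenon described in the abstract (not taken from the talk), the sketch below fits an overparameterized linear regression with the minimum-norm least-squares solution, i.e. the ridgeless limit of ridge regression: the training error is essentially zero, yet in this simulation the test error stays well below that of the trivial zero predictor. The sample size, dimension, noise level, and random data are all illustrative assumptions.

```python
# Illustrative sketch: minimum-norm interpolation in overparameterized
# linear regression (more parameters than samples).
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 200                          # fewer samples than parameters
beta = rng.normal(size=p) / np.sqrt(p)   # ground-truth coefficients, ||beta|| ~ 1
X_train = rng.normal(size=(n, p))
y_train = X_train @ beta + 0.1 * rng.normal(size=n)

# Minimum-norm least-squares solution: interpolates the training data.
beta_hat = np.linalg.pinv(X_train) @ y_train

X_test = rng.normal(size=(2000, p))
y_test = X_test @ beta + 0.1 * rng.normal(size=2000)

train_mse = np.mean((X_train @ beta_hat - y_train) ** 2)   # ~ 0 (interpolation)
test_mse = np.mean((X_test @ beta_hat - y_test) ** 2)      # finite, well-behaved
null_mse = np.mean(y_test ** 2)                            # trivial zero predictor
print(f"train MSE {train_mse:.2e}, test MSE {test_mse:.3f}, null MSE {null_mse:.3f}")
```

The point of the sketch is only that an unregularized interpolator need not overfit catastrophically; the talk makes this precise asymptotically, showing how such models behave like non-trivially regularized simpler models.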
Date: 10/7/2020
Speaker: Marinka Zitnik, Department of Biomedical Informatics, Harvard
Title/Abstract: TBA

Date: 10/14/2020
Speaker: Jeffrey Pennington, Google Brain
Title/Abstract: TBA

Date: 11/4/2020
Speaker: Florent Krzakala, Laboratoire de Physique, Ecole Normale Supérieure, Paris
Title/Abstract: TBA

Date: 11/11/2020
Speaker: Eric Mjolsness, Departments of Computer Science and Mathematics, UC Irvine
Title/Abstract: TBA
