BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//CMSA - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://cmsa.fas.harvard.edu
X-WR-CALDESC:Events for CMSA
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20190310T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20191103T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20200308T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20201101T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20210314T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20211107T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20200923T150000
DTEND;TZID=America/New_York:20200923T160000
DTSTAMP:20260511T100615Z
CREATED:20240209T014918Z
LAST-MODIFIED:20240515T201416Z
UID:10001784-1600873200-1600876800@cmsa.fas.harvard.edu
SUMMARY:Self-induced regularization from linear regression to neural networks
DESCRIPTION:Speaker: Andrea Montanari\, Departments of Electrical Engineering and Statistics\, Stanford \nTitle: Self-induced regularization from linear regression to neural networks \nAbstract: Modern machine learning methods\, most notably multi-layer neural networks\, require fitting highly non-linear models comprising tens of thousands to millions of parameters. Despite this\, little attention is paid to the regularization mechanisms that control model complexity. Indeed\, the resulting models are often so complex as to achieve vanishing training error: they interpolate the data. Nevertheless\, these models generalize well to unseen data: they have small test error. I will discuss several examples of this phenomenon\, beginning with a simple linear regression model and ending with two-layer neural networks in the so-called lazy regime. For these examples\, precise asymptotics can be determined mathematically using tools from random matrix theory. I will try to extract a unifying picture. A common feature is that a complex\, unregularized nonlinear model becomes essentially equivalent to a simpler model that is regularized in a non-trivial way. [Based on joint papers with: Behrooz Ghorbani\, Song Mei\, Theodor Misiakiewicz\, Feng Ruan\, Youngtak Sohn\, Jun Yan\, Yiqiao Zhong]
URL:https://cmsa.fas.harvard.edu/event/9-23-2020-new-tech-in-mathematics-seminar/
LOCATION:Virtual
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-New-Technologies-in-Mathematics-09.23.20.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20200916T150000
DTEND;TZID=America/New_York:20200916T160000
DTSTAMP:20260511T100615Z
CREATED:20240209T021453Z
LAST-MODIFIED:20240515T183741Z
UID:10001799-1600268400-1600272000@cmsa.fas.harvard.edu
SUMMARY:Graph Representation Learning: Recent Advances and Open Challenges
DESCRIPTION:Speaker: William Hamilton\, McGill University and MILA \nTitle: Graph Representation Learning: Recent Advances and Open Challenges \nAbstract: Graph-structured data is ubiquitous throughout the natural and social sciences\, from telecommunication networks to quantum chemistry. Building relational inductive biases into deep learning architectures is crucial if we want systems that can learn\, reason\, and generalize from this kind of data. Recent years have seen a surge in research on graph representation learning\, most prominently in the development of graph neural networks (GNNs). Advances in GNNs have led to state-of-the-art results in numerous domains\, including chemical synthesis\, 3D vision\, recommender systems\, question answering\, and social network analysis. In the first part of this talk\, I will provide an overview and summary of recent progress in this fast-growing area\, highlighting foundational methods and theoretical motivations. In the second part\, I will discuss fundamental limitations of the current GNN paradigm and propose open challenges for the theoretical advancement of the field.
URL:https://cmsa.fas.harvard.edu/event/9-16-2020-new-technologies-seminar/
LOCATION:Virtual
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-New-Technologies-in-Mathematics-09.16.20-1.png
END:VEVENT
END:VCALENDAR