BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//CMSA - ECPv6.15.17//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:CMSA
X-ORIGINAL-URL:https://cmsa.fas.harvard.edu
X-WR-CALDESC:Events for CMSA
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20170312T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20171105T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20180311T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20181104T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20190310T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20191103T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20200308T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20201101T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20210314T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20211107T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20220313T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20221106T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;VALUE=DATE:20210824
DTEND;VALUE=DATE:20210825
DTSTAMP:20260405T042958Z
CREATED:20230705T081718Z
LAST-MODIFIED:20250328T145235Z
UID:10000070-1629763200-1629849599@cmsa.fas.harvard.edu
SUMMARY:Big Data Conference 2021
DESCRIPTION:On August 24\, 2021\, the CMSA hosted our seventh annual Conference on Big Data. The Conference featured many speakers from the Harvard community as well as scholars from across the globe\, with talks focusing on computer science\, statistics\, math and physics\, and economics. \nThe 2021 Big Data Conference took place virtually on Zoom. \nOrganizers: \n\nShing-Tung Yau\, William Caspar Graustein Professor of Mathematics\, Harvard University\nScott Duke Kominers\, MBA Class of 1960 Associate Professor\, Harvard Business School\nHorng-Tzer Yau\, Professor of Mathematics\, Harvard University\nSergiy Verstyuk\, CMSA\, Harvard University\n\nSpeakers: \n\nAndrew Blumberg\, University of Texas at Austin\nMoran Koren\, Harvard CMSA\nHima Lakkaraju\, Harvard University\nKatrina Ligett\, The Hebrew University of Jerusalem\n\nTime (ET; Boston time)\nSpeaker\nTitle/Abstract\n\n\n9:00AM\nConference Organizers\nIntroduction and Welcome\n\n\n9:10AM – 9:55AM\nAndrew Blumberg\, University of Texas at Austin\nTitle: Robustness and stability for multidimensional persistent homology \nAbstract: A basic principle in topological data analysis is to study the shape of data by looking at multiscale homological invariants. The idea is to filter the data using a scale parameter that reflects feature size. However\, for many data sets\, it is very natural to consider multiple filtrations\, for example coming from feature scale and density. A key question that arises is how such invariants behave with respect to noise and outliers. This talk will describe a framework for understanding those questions and explore open problems in the area.\n\n\n10:00AM – 10:45AM\nKatrina Ligett\, The Hebrew University of Jerusalem\nTitle: Privacy as Stability\, for Generalization \nAbstract: Many data analysis pipelines are adaptive: the choice of which analysis to run next depends on the outcome of previous analyses. 
 Common examples include variable selection for regression problems and hyper-parameter optimization in large-scale machine learning problems: in both cases\, common practice involves repeatedly evaluating a series of models on the same dataset. Unfortunately\, this kind of adaptive re-use of data invalidates many traditional methods of avoiding overfitting and false discovery\, and has been blamed in part for the recent flood of non-reproducible findings in the empirical sciences. An exciting line of work beginning with Dwork et al. in 2015 establishes the first formal model and first algorithmic results providing a general approach to mitigating the harms of adaptivity\, via a connection to the notion of differential privacy. In this talk\, we’ll explore the notion of differential privacy and gain some understanding of how and why it provides protection against adaptivity-driven overfitting. Many interesting questions in this space remain open. \nJoint work with: Christopher Jung (UPenn)\, Seth Neel (Harvard)\, Aaron Roth (UPenn)\, Saeed Sharifi-Malvajerdi (UPenn)\, and Moshe Shenfeld (HUJI). This talk will draw on work that appeared at NeurIPS 2019 and ITCS 2020.\n\n\n10:50AM – 11:35AM\nHima Lakkaraju\, Harvard University\nTitle: Towards Reliable and Robust Model Explanations \nAbstract: As machine learning black boxes are increasingly being deployed in domains such as healthcare and criminal justice\, there is growing emphasis on building tools and techniques for explaining these black boxes in an interpretable manner. Such explanations are being leveraged by domain experts to diagnose systematic errors and underlying biases of black boxes. In this talk\, I will present some of our recent research that sheds light on the vulnerabilities of popular post hoc explanation techniques such as LIME and SHAP\, and also introduce novel methods to address some of these vulnerabilities. 
 More specifically\, I will first demonstrate that these methods are brittle\, unstable\, and vulnerable to a variety of adversarial attacks. Then\, I will discuss two solutions to address some of the vulnerabilities of these methods – (i) a framework based on adversarial training that is designed to make post hoc explanations more stable and robust to shifts in the underlying data; (ii) a Bayesian framework that captures the uncertainty associated with post hoc explanations and in turn allows us to generate explanations with user-specified levels of confidence. I will conclude the talk by discussing results from real-world datasets to demonstrate both the vulnerabilities in post hoc explanation techniques and the efficacy of our aforementioned solutions.\n\n\n11:40AM – 12:25PM\nMoran Koren\, Harvard CMSA\nTitle: A Gatekeeper’s Conundrum \nAbstract: Many selection processes contain a “gatekeeper”. The gatekeeper’s goal is to examine an applicant’s suitability for a proposed position before both parties endure substantial costs. Intuitively\, the introduction of a gatekeeper should reduce selection costs as unlikely applicants are sifted out. However\, we show that this is not always the case\, as the gatekeeper’s introduction inadvertently reduces the applicant’s expected costs and thus interferes with her self-selection. We study the conditions under which the gatekeeper’s presence improves the system’s efficiency and those under which it induces inefficiency. Additionally\, we show that the gatekeeper can sometimes improve selection correctness by behaving strategically (i.e.\, ignoring her private information with some probability).\n\n\n12:25PM\nConference Organizers\nClosing Remarks
URL:https://cmsa.fas.harvard.edu/event/big-data-conference-2021/
LOCATION:Virtual
CATEGORIES:Big Data Conference,Conference,Event
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/BD_21-Poster.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20180818T083000
DTEND;TZID=America/New_York:20180820T172000
DTSTAMP:20260405T042958Z
CREATED:20230715T083526Z
LAST-MODIFIED:20250304T213419Z
UID:10000084-1534581000-1534785600@cmsa.fas.harvard.edu
SUMMARY:From Algebraic Geometry to Vision and AI: A Symposium Celebrating the Mathematical Work of David Mumford
DESCRIPTION:On August 18 and 20\, 2018\, the Center of Mathematical Sciences and Applications and the Harvard University Mathematics Department hosted a conference\, From Algebraic Geometry to Vision and AI: A Symposium Celebrating the Mathematical Work of David Mumford. The talks took place in Science Center\, Hall B. \nSaturday\, August 18th: A day of talks on Vision\, AI\, and brain sciences \nMonday\, August 20th: A day of talks on Math \nSpeakers: \n\nStuart Geman\, Brown\nJanos Kollar\, Princeton\nTai Sing Lee\, CMU\nEmanuele Macri\, Northeastern\nJitendra Malik\, Berkeley / FAIR\nPeter Michor\, University of Vienna\nMichael Miller\, Johns Hopkins\nAaron Pixton\, MIT\nJayant Shah\, Northeastern\nJosh Tenenbaum\, MIT\nBurt Totaro\, UCLA\nAvi Wigderson\, IAS\nYing Nian Wu\, UCLA\nLaurent Younes\, Johns Hopkins\nSong-Chun Zhu\, UCLA\n\nOrganizers:\n\nChing-Li Chai\, University of Pennsylvania\nDavid Gu\, Stony Brook University\nAmnon Neeman\, Australian National University\nMark Nitzberg\, University of California at Berkeley\nYang Wang\, Hong Kong University of Science and Technology\nShing-Tung Yau\, Harvard University\nSong-Chun Zhu\, University of California\, Los Angeles\n\nPublication: \nPure and Applied Mathematics Quarterly\nSpecial Issue: In Honor of David Mumford\nGuest Editors: Ching-Li Chai\, Amnon Neeman
URL:https://cmsa.fas.harvard.edu/event/from-algebraic-geometry-to-vision-and-ai-a-symposium-celebrating-the-mathematical-work-of-david-mumford/
LOCATION:Common Room\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Conference,Event
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/Mumford-3.png
END:VEVENT
END:VCALENDAR