
SMaSH: Symposium for Mathematical Sciences at Harvard

May 17, 2022 @ 9:00 am - 6:00 pm


On Tuesday, May 17, 2022, from 9:00 am – 5:30 pm, the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and the Harvard Center of Mathematical Sciences and Applications (CMSA) held a symposium for the mathematical sciences community at Harvard.

Organizing Committee

  • Michael Brenner, Applied Mathematics (SEAS)
  • Michael Desai, Organismic and Evolutionary Biology (FAS)
  • Sam Gershman, Psychology (FAS)
  • Michael Hopkins, Mathematics (FAS)
  • Gary King, Government (FAS)
  • Peter Koellner, Philosophy (FAS)
  • Scott Kominers, Economics (FAS) & Entrepreneurial Management (HBS)
  • Xihong Lin, Biostatistics (HSPH) & Statistics (FAS)
  • Yue Lu, Electrical Engineering (SEAS)
  • Susan Murphy, Statistics (FAS) & Computer Science (SEAS)
  • Lisa Randall, Physics (FAS)
  • Eugene Shakhnovich, Chemistry (FAS)
  • Salil Vadhan, Computer Science (SEAS)
  • Horng-Tzer Yau, Mathematics (FAS)

This event was held in-person at the Science and Engineering Complex (SEC) at 150 Western Ave, Allston, MA 02134, and streamed on Zoom.

Harvard graduate students and postdocs presented posters at the poster sessions.


Venue: Science and Engineering Complex (SEC)


Speakers

  • Anurag Anshu, Computer Science (SEAS)
  • Morgane Austern, Statistics (FAS)
  • Demba Ba, Electrical Engineering & Bioengineering (SEAS)
  • Michael Brenner, Applied Mathematics (SEAS)
  • Rui Duan, Biostatistics (HSPH)
  • Yannai A. Gonczarowski, Economics (FAS) & Computer Science (SEAS)
  • Kosuke Imai, Government & Statistics (FAS)
  • Sham M. Kakade, Computer Science (SEAS) & Statistics (FAS)
  • Seth Neel, Technology & Operations Management (HBS)
  • Melanie Matchett Wood, Mathematics (FAS)

Schedule PDF

Schedule

9:00–9:30 am Coffee and Breakfast
West Atrium (ground floor of the SEC)
9:30–10:30 am Faculty Talks
Winokur Family Hall Classroom (Room 1.321), located just off of the West Atrium
Kosuke Imai, Government & Statistics (FAS): Use of Simulation Algorithms for Legislative Redistricting Analysis and Evaluation
Yannai A. Gonczarowski, Economics (FAS) & Computer Science (SEAS): The Sample Complexity of Up-to-ε Multi-Dimensional Revenue Maximization
10:30–11:00 am Coffee Break
West Atrium (ground floor of the SEC)
11:00–12:00 pm Faculty Talks
Winokur Family Hall Classroom (Room 1.321), located just off of the West Atrium
Seth Neel, Technology & Operations Management (HBS): “Machine (Un)Learning” or Why Your Deployed Model Might Violate Existing Privacy Law
Demba Ba, Electrical Engineering & Bioengineering (SEAS): Geometry, AI, and the Brain
12:00–1:00 pm Lunch Break
Engineering Yard Tent
1:00–2:30 pm Faculty Talks
Winokur Family Hall Classroom (Room 1.321), located just off of the West Atrium
Melanie Matchett Wood, Mathematics (FAS): Understanding distributions of algebraic structures through their moments
Morgane Austern, Statistics (FAS): Limit theorems for structured random objects
Anurag Anshu, Computer Science (SEAS): Operator-valued polynomial approximations and their use
2:30–3:00 pm Coffee Break
West Atrium (ground floor of the SEC)
3:00–4:30 pm Faculty Talks
Winokur Family Hall Classroom (Room 1.321), located just off of the West Atrium
Michael Brenner, Applied Mathematics (SEAS): Towards living synthetic materials
Rui Duan, Biostatistics (HSPH): Federated and transfer learning for healthcare data integration
Sham M. Kakade, Computer Science (SEAS) & Statistics (FAS): What is the Statistical Complexity of Reinforcement Learning?
4:30–5:30 pm Reception with jazz musicians & Poster Session
Engineering Yard Tent

Faculty Talks

Speaker Title / Abstract / Bio
Anurag Anshu, Computer Science (SEAS) Title: Operator-valued polynomial approximations and their use.

Abstract: Approximation of complicated functions with low degree polynomials is an indispensable tool in mathematics. This becomes particularly relevant in computer science, where the complexity of interesting functions is often captured by the degree of the approximating polynomials. This talk concerns the approximation of operator-valued functions (such as the exponential of a hermitian matrix, or the intersection of two projectors) with low-degree operator-valued polynomials. We will highlight the challenges that arise in achieving similarly good approximations as real-valued functions, as well as recent methods to overcome them. We will discuss applications to the ground states in physics and quantum complexity theory: correlation lengths, area laws and concentration bounds.
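As a toy illustration of the general theme (not the talk’s specific constructions), the sketch below approximates the exponential of a Hermitian matrix with a truncated Taylor polynomial and checks how the operator-norm error shrinks with degree; the matrix, seed, and normalization are illustrative choices.

```python
import numpy as np

def poly_approx_expm(H, degree):
    """Truncated Taylor polynomial of the matrix exponential:
    exp(H) ~ sum_{k=0}^{degree} H^k / k!"""
    n = H.shape[0]
    term = np.eye(n)
    result = np.eye(n)
    for k in range(1, degree + 1):
        term = term @ H / k      # term is now H^k / k!
        result = result + term
    return result

# Illustrative Hermitian (real symmetric) matrix, normalized so its
# spectral norm is 1.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
H = (A + A.T) / 2
H = H / np.linalg.norm(H, 2)

# Exact exponential via the spectral decomposition H = V diag(w) V^T.
w, V = np.linalg.eigh(H)
exact = (V * np.exp(w)) @ V.T

for d in (2, 4, 8):
    err = np.linalg.norm(poly_approx_expm(H, d) - exact, 2)
    print(f"degree {d}: operator-norm error {err:.2e}")
```

For a matrix of spectral norm 1, the degree-d Taylor error decays like 1/(d+1)!, which is the kind of degree-versus-accuracy trade-off the abstract studies in much harder operator-valued settings.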

Bio: Anurag Anshu is an Assistant Professor of computer science at Harvard University. He spends a lot of time exploring the rich structure of quantum many-body systems from the viewpoint of quantum complexity theory, quantum learning theory and quantum information theory. He held postdoctoral positions at University of California, Berkeley and University of Waterloo and received his PhD from National University of Singapore, focusing on quantum communication complexity.

Morgane Austern, Statistics (FAS) Title: Limit theorems for structured random objects

Abstract: Statistical inference relies on numerous tools from probability theory to study the properties of estimators. Some of the most central ones are the central limit theorem and the free central limit theorem. However, these same tools are often inadequate for modern machine learning problems, which frequently involve structured data (e.g., networks) or complicated dependence structures (e.g., dependent random matrices). In this talk, we extend universal limit theorems beyond the classical setting. We consider distributionally “structured” and dependent random objects, i.e., random objects whose distribution is invariant under the action of an amenable group. We show, under mild moment and mixing conditions, a series of universal second- and third-order limit theorems: central limit theorems, concentration inequalities, the Wigner semicircular law, and Berry–Esseen bounds. The utility of these results will be illustrated by a series of examples in machine learning, network theory, and information theory.
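For orientation, the classical i.i.d. baseline that such universal limit theorems generalize can be stated as:

```latex
% Classical central limit theorem (the i.i.d. baseline being extended
% to group-invariant, dependent random objects):
\[
  \frac{1}{\sqrt{n}} \sum_{i=1}^{n} \bigl( X_i - \mathbb{E}[X_1] \bigr)
  \;\xrightarrow{d}\;
  \mathcal{N}\bigl( 0, \operatorname{Var}(X_1) \bigr),
  \qquad X_1, X_2, \dots \ \text{i.i.d.},\quad \mathbb{E}[X_1^2] < \infty.
\]
```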

Bio: Morgane Austern is an assistant professor in the Statistics Department of Harvard University. Broadly, she is interested in developing probability tools for modern machine learning and in establishing the properties of learning algorithms in structured and dependent data contexts. She graduated with a PhD in statistics from Columbia University in 2019 where she worked in collaboration with Peter Orbanz and Arian Maleki on limit theorems for dependent and structured data. She was a postdoctoral researcher at Microsoft Research New England from 2019 to 2021.

Demba Ba, Electrical Engineering & Bioengineering (SEAS) Title: Geometry, AI, and the Brain

Abstract: A large body of experiments suggests that neural computations reflect, in some sense, the geometry of “the world”. How do artificial and neural systems learn representations of “the world” that reflect its geometry? How, for instance, do we, as humans, learn representations of objects, e.g. fruits, that reflect the geometry of object space? Developing artificial systems that can capture/understand the geometry of the data they process may enable them to learn representations useful in many different contexts and tasks. My talk will describe an artificial neural-network architecture that, starting from a simple union-of-manifold model of data comprising objects from different categories, mimics some aspects of how primates learn, organize, and retrieve concepts, in a manner that respects the geometry of object space.

Bio: Demba Ba serves as an Associate Professor of electrical engineering and bioengineering in Harvard University’s School of Engineering and Applied Sciences, where he directs the CRISP group. Recently, he has taken a keen interest in the connection between artificial neural networks and sparse signal processing. His group leverages this connection to solve data-driven unsupervised learning problems in neuroscience, to understand the principles of hierarchical representations of sensory signals in the brain, and to develop explainable AI. In 2016, he received a Research Fellowship in Neuroscience from the Alfred P. Sloan Foundation. In 2021, Harvard’s Faculty of Arts and Sciences awarded him the Roslyn Abramson award for outstanding undergraduate teaching.

Michael Brenner, Applied Mathematics (SEAS) Title: Towards living synthetic materials

Abstract: Biological materials are much more complicated and functional than synthetic ones. Over the past several years we have been trying to figure out why. A sensible hypothesis is that biological materials are programmable. But we are very far from being able to program materials we create with this level of sophistication.  I will discuss our (largely unsuccessful) efforts to bridge this gap, though as of today I’m somewhat optimistic that we are arriving at a set of theoretical models that is rich enough to produce relevant emergent behavior.

Bio: I’ve been at Harvard for a long time. My favorite part of Harvard is the students.

Rui Duan, Biostatistics (HSPH) Title: Federated and transfer learning for healthcare data integration

Abstract: The growth of availability and variety of healthcare data sources has provided unique opportunities for data integration and evidence synthesis, which can potentially accelerate knowledge discovery and improve clinical decision-making. However, many practical and technical challenges, such as data privacy, high dimensionality, and heterogeneity across different datasets, remain to be addressed. In this talk, I will introduce several methods for the effective and efficient integration of multiple healthcare datasets in order to train statistical or machine learning models with improved generalizability and transferability. Specifically, we develop communication-efficient federated learning algorithms for jointly analyzing multiple datasets without the need of sharing patient-level data, as well as transfer learning approaches that leverage shared knowledge learned across multiple datasets to improve the performance of statistical models in target populations of interest. We will discuss both the theoretical properties and examples of implementation of our methods in real-world research networks and data consortia.
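As a hedged sketch of the privacy-preserving idea (standard one-shot federated averaging, not necessarily the methods developed in the talk), sites can share only fitted coefficients and sample sizes rather than patient-level rows; all names and data below are synthetic.

```python
import numpy as np

def local_fit(X, y):
    """Each site fits a least-squares model on its own data only."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def federated_average(sites):
    """One-shot federated estimator: combine per-site coefficient
    vectors, weighted by sample size; no raw rows leave any site."""
    total = sum(len(y) for _, y in sites)
    return sum(len(y) * local_fit(X, y) for X, y in sites) / total

# Three synthetic "hospitals" drawing from the same true model.
rng = np.random.default_rng(7)
w_true = np.array([0.5, -1.0, 2.0])
sites = []
for n in (150, 200, 250):
    X = rng.standard_normal((n, 3))
    y = X @ w_true + 0.1 * rng.standard_normal(n)
    sites.append((X, y))

w_fed = federated_average(sites)
print(np.round(w_fed, 3))
```

The design choice illustrated here is the communication pattern: each site transmits a 3-vector and an integer instead of hundreds of patient records.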

Bio: Rui Duan is an Assistant Professor of Biostatistics at the Harvard T.H. Chan School of Public Health. She received her Ph.D. in Biostatistics in May 2020 from the University of Pennsylvania. Her research interests focus on developing statistical, machine learning, and informatics tools for (1) efficient data integration in biomedical research, (2) understanding and accounting for the heterogeneity of biomedical data and improving the generalizability and transferability of models across populations, and (3) advancing precision medicine research on rare diseases and underrepresented populations.

Yannai A. Gonczarowski, Economics (FAS) & Computer Science (SEAS) Title: The Sample Complexity of Up-to-ε Multi-Dimensional Revenue Maximization

Abstract: We consider the sample complexity of revenue maximization for multiple bidders in unrestricted multi-dimensional settings. Specifically, we study the standard model of n additive bidders whose values for m heterogeneous items are drawn independently. For any such instance and any ε > 0, we show that it is possible to learn an ε-Bayesian Incentive Compatible auction whose expected revenue is within ε of the optimal ε-BIC auction from only polynomially many samples.

Our fully nonparametric approach is based on ideas that hold quite generally, and completely sidesteps the difficulty of characterizing optimal (or near-optimal) auctions for these settings. Therefore, our results easily extend to general multi-dimensional settings, including valuations that are not necessarily even subadditive, and arbitrary allocation constraints. For the cases of a single bidder and many goods, or a single parameter (good) and many bidders, our analysis yields exact incentive compatibility (and, for the latter, also computational efficiency). Although the single-parameter case is already well understood, our corollary for this case slightly extends the state of the art.

Joint work with S. Matthew Weinberg.
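The abstract’s guarantee, written in display form (notation as in the abstract; the exact polynomial is left unspecified here):

```latex
% Sample-complexity guarantee: n additive bidders, m independent
% items, any eps > 0.
\[
  \mathrm{poly}\!\left(n, m, \tfrac{1}{\varepsilon}\right)
  \ \text{samples suffice to learn an } \varepsilon\text{-BIC auction with}
  \quad
  \mathbb{E}[\mathrm{Rev}] \;\ge\; \mathrm{OPT}_{\varepsilon\text{-BIC}} - \varepsilon.
\]
```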

Bio: Yannai A. Gonczarowski is an Assistant Professor of Economics and of Computer Science at Harvard University—the first faculty member at Harvard to have been appointed to both of these departments. Interested in both economic theory and theoretical computer science, Yannai explores computer-science-inspired economics: he harnesses approaches, aesthetics, and techniques traditionally originating in computer science to derive economically meaningful insights. Yannai received his PhD from the Departments of Math and CS, and the Center for the Study of Rationality, at the Hebrew University of Jerusalem, where he was advised by Sergiu Hart and Noam Nisan. Yannai is also a professionally trained opera singer, having acquired a bachelor’s degree and a master’s degree in Classical Singing at the Jerusalem Academy of Music and Dance. Yannai’s doctoral dissertation was recognized with several awards, including the 2018 Michael B. Maschler Prize of the Israeli Chapter of the Game Theory Society and the ACM SIGecom Doctoral Dissertation Award for 2018. For the design and implementation of the National Matching System for Gap-Year Programs in Israel, he was awarded the Best Paper Award at MATCH-UP’19 and the inaugural INFORMS AMD Michael H. Rothkopf Junior Researcher Paper Prize (first place) for 2020. Yannai is also the recipient of the inaugural ACM SIGecom Award for Best Presentation by a Student or Postdoctoral Researcher at EC’18. His first textbook, “Mathematical Logic through Python” (Gonczarowski and Nisan), which introduces a new approach to teaching the material of a basic Logic course to Computer Science students, tailored to the unique intuitions and strengths of this cohort of students, is forthcoming from Cambridge University Press.

Kosuke Imai, Government & Statistics (FAS) Title: Use of Simulation Algorithms for Legislative Redistricting Analysis and Evaluation

Abstract: After the 2020 Census, many states have been redrawing the boundaries of Congressional and state legislative districts. To evaluate the partisan and racial bias of redistricting plans, scholars have developed Monte Carlo simulation algorithms. The idea is to generate a representative sample of redistricting plans under a specified set of criteria and conduct a statistical hypothesis test by comparing a proposed plan with these simulated plans. I will give a brief overview of these redistricting simulation algorithms and discuss how they are used in real-world court cases.
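The hypothesis-testing logic in the abstract can be sketched in a few lines (a generic Monte Carlo comparison; the actual redistricting samplers are far more involved, and every number below is synthetic):

```python
import numpy as np

def ensemble_p_value(observed_stat, simulated_stats):
    """One-sided Monte Carlo p-value: the share of ensemble plans whose
    statistic is at least as extreme as the proposed plan's, with the
    standard +1 correction so the test is valid at finite ensemble sizes."""
    simulated_stats = np.asarray(simulated_stats)
    n_extreme = np.sum(simulated_stats >= observed_stat)
    return (n_extreme + 1) / (len(simulated_stats) + 1)

# Synthetic stand-in: a partisan-bias statistic for 1,000 plans drawn
# under neutral criteria, versus one proposed plan that is an outlier
# relative to that neutral ensemble.
rng = np.random.default_rng(42)
neutral_plans = rng.normal(loc=0.0, scale=1.0, size=1000)
proposed_plan = 3.2
print(f"p-value: {ensemble_p_value(proposed_plan, neutral_plans):.4f}")
```

A small p-value says the proposed plan is atypical of plans drawn under the stated neutral criteria, which is exactly the comparison offered to courts.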

Bio: Kosuke Imai is Professor in the Department of Government and Department of Statistics at Harvard University. Before moving to Harvard in 2018, Imai taught at Princeton University for 15 years where he was the founding director of the Program in Statistics and Machine Learning. Imai specializes in the development of statistical methods and machine learning algorithms and their applications to social science research. His areas of expertise include causal inference, computational social science, program evaluation, and survey methodology.

Sham M. Kakade, Computer Science (SEAS) & Statistics (FAS) Title: What is the Statistical Complexity of Reinforcement Learning?

Abstract: This talk will highlight much of the recent progress on the following fundamental question in the theory of reinforcement learning: what (representational or structural) conditions govern our ability to generalize and avoid the curse of dimensionality?  With regards to supervised learning, these questions are reasonably well understood, both practically and theoretically: practically, we have overwhelming evidence on the value of representational learning (say through modern deep networks) as a means for sample efficient learning, and, theoretically, there are well-known complexity measures (e.g. the VC dimension and Rademacher complexity) that govern the statistical complexity of learning.  Providing an analogous theory for reinforcement learning is far more challenging, where even characterizing structural conditions which support sample efficient generalization has been far less well understood, until recently.

This talk will survey recent advances towards characterizing when generalization is possible in RL, focusing on both necessary and sufficient conditions. In particular, we will introduce a new complexity measure, the Decision-Estimation Coefficient, that is proven to be necessary (and, essentially, sufficient) for sample-efficient interactive learning.

Bio: Sham Kakade is a professor at Harvard University and a co-director of the Kempner Institute for the Study of Natural and Artificial Intelligence. He works on the mathematical foundations of machine learning and AI. Sham’s thesis helped lay the statistical foundations of reinforcement learning. With his collaborators, his additional contributions include foundational results on policy gradient methods in reinforcement learning; regret bounds for linear bandit and Gaussian process bandit models; tensor and spectral methods for latent variable models; and a number of convergence analyses for convex and non-convex algorithms. He is the recipient of the ICML Test of Time Award, the IBM Pat Goldberg Best Paper Award, and the INFORMS Revenue Management and Pricing Prize. He was program chair for COLT 2011.

Sham was an undergraduate at Caltech, where he studied physics and worked under the guidance of John Preskill in quantum computing. He completed his Ph.D. with Peter Dayan in computational neuroscience at the Gatsby Computational Neuroscience Unit. He was a postdoc with Michael Kearns at the University of Pennsylvania.

Seth Neel, Technology & Operations Management (HBS) Title: “Machine (Un)Learning” or Why Your Deployed Model Might Violate Existing Privacy Law

Abstract: Businesses like Facebook and Google depend on training sophisticated models on user data. Increasingly—in part because of regulations like the European Union’s General Data Protection Regulation and the California Consumer Privacy Act—these organizations are receiving requests to delete the data of particular users. But what should that mean? It is straightforward to delete a customer’s data from a database and stop using it to train future models. But what about models that have already been trained using an individual’s data? These are not necessarily safe; it is known that individual training data can be exfiltrated from models trained in standard ways via model inversion attacks. In a series of papers, we help formalize a rigorous notion of data deletion and propose algorithms to efficiently delete user data from trained models with provable guarantees in both convex and non-convex settings.
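As a point of reference for the abstract (the naive baseline, not the papers’ algorithms), exact deletion can always be achieved by dropping the user’s rows and retraining from scratch; efficient unlearning aims to match this guarantee at far lower cost. The model, data, and helper names below are illustrative.

```python
import numpy as np

def train(X, y):
    """Plain least-squares model: w = argmin ||Xw - y||^2."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def forget(X, y, user_rows):
    """Naive exact unlearning: drop the user's rows and retrain.
    Trivially equivalent to never having seen the data, but it costs
    a full retraining pass every time a deletion request arrives."""
    keep = np.setdiff1d(np.arange(len(y)), user_rows)
    return train(X[keep], y[keep])

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(100)

w_del = forget(X, y, user_rows=[0, 1, 2])   # delete three users' rows
w_ref = train(X[3:], y[3:])                 # retrain-from-scratch reference
print(np.allclose(w_del, w_ref))
```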

Bio: Seth Neel is a first-year Assistant Professor in the TOM Unit at Harvard Business School, and Co-PI of the SAFR ML Lab in the D3 Institute, which develops methodology to incorporate privacy and fairness guarantees into techniques for machine learning and data analysis, while balancing other critical considerations like accuracy, efficiency, and interpretability. He obtained his Ph.D. from the University of Pennsylvania in 2020 where he was an NSF graduate fellow. His work has focused primarily on differential privacy, notions of fairness in a variety of machine learning settings, and adaptive data analysis.

Melanie Matchett Wood, Mathematics (FAS) Title: Understanding distributions of algebraic structures through their moments

Abstract: A classical tool of probability and analysis is to use the moments (mean, variance, etc.) of a distribution to recognize an unknown distribution of real numbers. In recent work, we are interested in distributions of algebraic structures that can’t be captured in a single number. We will explain one example, the fundamental group, that captures something about the shapes of possibly complicated or high-dimensional spaces. We are developing a new theory of the moment problem for random algebraic structures, which helps to identify such distributions, such as the fundamental groups of random three-dimensional spaces. This talk is based partly on joint work with Will Sawin.
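The classical version of the moment problem alluded to in the abstract: under suitable growth conditions, the moment sequence pins down the distribution, as it does for the standard normal.

```latex
% Moments of the standard normal X ~ N(0,1):
\[
  \mathbb{E}[X^{k}] =
  \begin{cases}
    (k-1)!! & k \text{ even},\\
    0       & k \text{ odd}.
  \end{cases}
\]
% This sequence satisfies Carleman's condition
% \( \sum_{k} \mathbb{E}[X^{2k}]^{-1/(2k)} = \infty \),
% so the standard normal is the unique distribution with these moments.
```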

Bio: Melanie Matchett Wood is a professor of mathematics at Harvard University and a Radcliffe Alumnae Professor at the Radcliffe Institute for Advanced Study.  Her work spans number theory, algebraic geometry, algebraic topology, additive combinatorics, and probability. Wood has been awarded a CAREER grant, a Sloan Research Fellowship, a Packard Fellowship for Science and Engineering, and the AWM-Microsoft Research Prize in Algebra and Number Theory, and she is a Fellow of the American Mathematical Society. In 2021, Wood received the National Science Foundation’s Alan T. Waterman Award, the nation’s highest honor for early-career scientists and engineers.


Details

Date:
May 17, 2022
Time:
9:00 am - 6:00 pm

Venue

Science and Engineering Complex (SEC)
150 Western Ave, Allston, MA 02134