Together with the School of Engineering and Applied Sciences, the CMSA will be hosting a lecture series on the Frontiers in Applied Mathematics and Computation. Talks in this series will aim to highlight current research trends at the interface of applied math and computation and will explore the application of these trends to challenging scientific, engineering, and societal problems.
Lectures will take place on March 25, April 1, and April 29, 2021.
The schedule below will be updated as talks are confirmed.
10:00 – 11:00am ET
Speaker: Joseph M. Teran
Title: Affine-Particle-In-Cell with Conservative Resampling and Implicit Time Stepping for Surface Tension Forces
Abstract: The Particle-In-Cell (PIC) method of Harlow is one of the first and most widely used numerical methods for Partial Differential Equations (PDE) in computational physics. Its relative efficiency, versatility and intuitive implementation have made it particularly popular in computational incompressible flow, plasma physics and large strain elastoplasticity. PIC is characterized by its dual particle/grid (Lagrangian/Eulerian) representation of material where particles are generally used to track material transport in a Lagrangian way and a structured Eulerian grid is used to discretize remaining spatial derivatives in the PDE. I will discuss the importance of conserving linear and angular momentum when switching between these two representations and the recent Affine-Particle-In-Cell (APIC) extension to PIC designed for this conservation. I will also discuss a recent APIC technique for discretizing surface tension forces and their linearizations needed for implicit time stepping. This technique is characterized by a novel surface resampling strategy and I will discuss a generalization of the APIC conservation to this setting.
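The particle-to-grid transfer at the heart of PIC and APIC can be sketched in one dimension. The following is an illustrative sketch, not the talk's implementation: it assumes linear hat weights on a uniform grid, and reduces the APIC affine velocity matrix to a scalar velocity gradient per particle. Setting the affine coefficients to zero recovers plain PIC.

```python
import numpy as np

def p2g_momentum(xp, vp, cp, mp, h, n):
    """Particle-to-grid momentum transfer (1-D, linear hat weights).

    xp, vp, mp: particle positions, velocities, masses
    cp: per-particle affine velocity gradient (APIC); pass zeros for plain PIC
    h:  grid spacing; n: number of grid nodes
    """
    mass = np.zeros(n)
    mom = np.zeros(n)
    for x, v, c, m in zip(xp, vp, cp, mp):
        i = int(x / h)                       # left node of the cell containing x
        for node in (i, i + 1):
            w = 1.0 - abs(x / h - node)      # linear hat weight
            if w <= 0.0:
                continue
            xi = node * h
            mass[node] += w * m
            # APIC adds the affine term c * (xi - x); with c = 0 this is PIC
            mom[node] += w * m * (v + c * (xi - x))
    # Grid velocity, leaving empty nodes at zero
    vg = np.divide(mom, mass, out=np.zeros(n), where=mass > 0)
    return mass, mom, vg
```

Because linear weights reproduce linear functions (they sum to one and their first moments about the particle vanish), the affine contribution sums to zero over each particle's nodes, so total mass and linear momentum transfer to the grid exactly, for any choice of the affine coefficients.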
9:00 – 10:00am ET
Speaker: George Biros
Title: Inverse biophysical modeling and its application to neurooncology
Abstract: A predictive, patient-specific, biophysical model of tumor growth would be an invaluable tool for causally connecting diagnostics with predictive medicine. For example, it could be used for tumor grading, characterization of the tumor microenvironment, recurrence prediction, and treatment planning, e.g., choosing a chemotherapy protocol or determining enrollment eligibility for clinical trials. Such a model would also provide an important bridge between molecular drivers of tumor growth and imaging-based phenotypic signatures, and thus help identify and quantify mechanism-based associations between the two. Unfortunately, such a predictive biophysical model does not exist. Existing models undergoing clinical evaluation are too simple: they do not even capture the MRI phenotype. Although many highly complex models have been proposed, the major hurdle in deploying them clinically is their calibration and validation.
In this talk, I will discuss the challenges related to the calibration and validation of biophysical models, and in particular the mathematical structure of the underlying inverse problems. I will also present a new algorithm that localizes the tumor origin within a few millimeters.
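To make the structure of such calibration problems concrete, here is a deliberately minimal sketch, far simpler than the PDE-based models discussed in the talk: a hypothetical logistic-growth stand-in whose growth rate and seed size are recovered from observations. In this toy setting the inverse problem linearizes under a logit transform and reduces to least squares; realistic calibration requires PDE-constrained optimization.

```python
import numpy as np

# Hypothetical stand-in forward model: logistic growth of a tumor-volume
# fraction u(t) = 1 / (1 + (1/u0 - 1) exp(-rho t)), with unknown rate rho
# and unknown seed size u0. This is illustrative only.
def forward(rho, u0, t):
    return 1.0 / (1.0 + (1.0 / u0 - 1.0) * np.exp(-rho * t))

def calibrate(t, u):
    """Recover (rho, u0) from observations u(t).

    The logit transform linearizes the model:
        log(u / (1 - u)) = log(u0 / (1 - u0)) + rho * t,
    so this toy calibration reduces to a linear least-squares fit.
    """
    z = np.log(u / (1.0 - u))
    A = np.column_stack([t, np.ones_like(t)])
    rho, b = np.linalg.lstsq(A, z, rcond=None)[0]
    u0 = 1.0 / (1.0 + np.exp(-b))            # invert the logit of u0
    return rho, u0
```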
10:00 – 11:00am ET
Speaker: Samory K. Kpotufe
Title: From Theory to Clustering
Abstract: Clustering is a basic problem in data analysis, consisting of partitioning data into meaningful groups called clusters. Practical clustering procedures tend to meet two criteria: flexibility in the shapes and number of clusters estimated, and efficient processing. While many practical procedures might meet either of these criteria in different applications, general guarantees often hold only for theoretical procedures that are hard, if not impossible, to implement. A main aim of this talk is to address this gap.
We will discuss two recent approaches that compete with state-of-the-art procedures, while at the same time relying on rigorous analysis of clustering. The first approach fits within the framework of density-based clustering, a family of flexible clustering approaches. It builds primarily on theoretical insights on nearest-neighbor graphs, a geometric data structure shown to encode local information on the data density. The second approach speeds up kernel k-means, a popular Hilbert space embedding and clustering method. This more efficient approach relies on a new interpretation, and alternative use, of kernel-sketching as a geometry-preserving random projection in Hilbert space.
Finally, we will present recent experimental results combining the benefits of both approaches in the IoT application domain.
The talk is based on various works with collaborators Sanjoy Dasgupta, Kamalika Chaudhuri, Ulrike von Luxburg, Heinrich Jiang, Bharath Sriperumbudur, Kun Yang, and Nick Feamster.
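As a toy illustration of the local quantity that nearest-neighbor graphs encode (not the talk's procedure), a k-NN density estimate assigns each point a density inversely proportional to the volume of the ball reaching its k-th nearest neighbor:

```python
import numpy as np

def knn_density(X, k=5):
    """k-NN density estimate: f(x) ~ k / (n * r_k(x)^d), where r_k(x) is the
    distance from x to its k-th nearest neighbor. Points in dense regions get
    small r_k and hence large estimated density (illustrative sketch only;
    the constant of the d-ball volume is omitted).
    """
    n, d = X.shape
    # Pairwise Euclidean distances (O(n^2 d); fine for an illustration)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    rk = np.sort(D, axis=1)[:, k]   # k-th neighbor distance (self sits at 0)
    return k / (n * rk ** d)
```

Density-based clustering methods of the kind discussed above then extract clusters from high-density regions of such an estimate.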
12:00 – 1:00pm ET
Speaker: Jonas Martin Peters
Title: Causality and Distribution Generalization
Abstract: Purely predictive methods do not perform well when the test distribution changes too much from the training distribution. Causal models are known to be stable with respect to distributional shifts such as arbitrarily strong interventions on the covariates, but do not perform well when the test distribution differs only mildly from the training distribution. We discuss anchor regression, a framework that provides a trade-off between causal and predictive models. The method poses different (convex and non-convex) optimization problems and relates to methods that are tailored for instrumental variable settings. We show how similar principles can be used for inferring metabolic networks. If time allows, we discuss extensions to nonlinear models and theoretical limitations of such methodology.
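A minimal sketch of the anchor regression estimator in its linear, closed-form version (the variable names are illustrative). The parameter gamma realizes the trade-off described above: gamma = 1 recovers ordinary least squares, while gamma growing large approaches the instrumental-variable (two-stage least squares) solution.

```python
import numpy as np

def anchor_regression(X, y, A, gamma):
    """Linear anchor regression (sketch):
        b = argmin ||(I - P_A)(y - X b)||^2 + gamma * ||P_A(y - X b)||^2,
    where P_A projects onto the column span of the anchor variables A.
    Equivalently, ordinary least squares after the data transformation
    M -> (I - P_A) M + sqrt(gamma) * P_A M.
    """
    def proj(M):
        # Projection onto span(A), applied via least squares
        return A @ np.linalg.lstsq(A, M, rcond=None)[0]

    def transform(M):
        return (M - proj(M)) + np.sqrt(gamma) * proj(M)

    b, *_ = np.linalg.lstsq(transform(X), transform(y), rcond=None)
    return b
```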
1:00 – 2:00pm ET
Speaker: Laura Grigori
Title: Randomization and communication avoiding techniques for large scale linear algebra
Abstract: In this talk we will discuss recent developments in randomization and communication-avoiding techniques for large-scale linear algebra operations. We will focus in particular on solving linear systems of equations, and will discuss a randomized process for orthogonalizing a set of vectors and its use in GMRES, while also exploiting mixed precision. We will also discuss a robust multilevel preconditioner that further accelerates the solution of large-scale linear systems on parallel computers.
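To illustrate the flavor of randomized orthogonalization (an illustrative sketch, not the talk's exact algorithm), the following Gram-Schmidt variant produces vectors whose image under a random Gaussian sketch is orthonormal, taking the expensive inner products in the low-dimensional sketch space rather than in the full space:

```python
import numpy as np

def sketched_gram_schmidt(W, k, seed=0):
    """Orthogonalize the columns of W so that S @ Q has orthonormal columns,
    where S is a k x n Gaussian sketching matrix with k << n. Inner products
    and norms are computed on k-vectors, reducing communication and flops.
    """
    n, m = W.shape
    rng = np.random.default_rng(seed)
    S = rng.normal(size=(k, n)) / np.sqrt(k)   # Gaussian sketching matrix
    Q = np.zeros((n, m))
    P = np.zeros((k, m))                       # P = S @ Q, kept orthonormal
    for j in range(m):
        p = S @ W[:, j]                        # sketch the new vector
        r = P[:, :j].T @ p                     # coefficients in sketch space
        q = W[:, j] - Q[:, :j] @ r             # subtract in full space
        p = p - P[:, :j] @ r                   # ... and in sketch space
        nrm = np.linalg.norm(p)                # norm taken in sketch space
        Q[:, j] = q / nrm
        P[:, j] = p / nrm
    return Q, S
```

If the sketch dimension k is large enough relative to m, the sketch approximately preserves inner products, so Q itself is close to orthonormal in the full space as well.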