Workshop on Foundations of Computational Science

On August 29–31, 2019, the Center of Mathematical Sciences and Applications (CMSA) will host a workshop on Foundations of Computational Science. The workshop will be held in Room G10 of the CMSA, located at 20 Garden Street, Cambridge, MA.

Please register here. 


Schedule: 

Thursday, August 29

8:30 – 9:00am: Breakfast
9:00 – 9:15am: Opening
9:15 – 9:40am: Bo Zhang, "AI and Mathematics"
9:40 – 10:05am: Tat-Seng Chua
10:05 – 10:35am: Group Photo and Coffee Break
10:35 – 10:50am: Maosong Sun, "Deep Learning-Based Chinese Language Computation at Tsinghua University: Progress and Challenges"
10:50 – 11:05am: Minlie Huang, "Controllable Text Generation"
11:05 – 11:20am: Yang Liu, "Natural Language Translation"
11:20 – 11:45am: Yike Guo, "Data Efficiency in Machine Learning"
11:45am – 12:10pm: Zuowei Shen
12:10 – 1:45pm: Lunch
1:45 – 2:00pm: Wenwu Zhu, "Explainable Media and Network Representation"
2:00 – 2:15pm: Wee Sun Lee
2:15 – 2:30pm: Jun Zhu, "Particle-Based Inference for Bayesian Deep Learning"
2:30 – 3:00pm: Coffee Break
3:00 – 3:15pm: Hang Su, "Adversarial Attacks in Deep Learning"
3:15 – 3:30pm: Ke Deng, "Understanding Complicated Patterns of Chinese Texts with Very Weak Training"
3:30 – 4:00pm: David Gu, "A Geometric View to Optimal Transportation and Generative Adversarial Models"
4:00 – 4:30pm: Donald Rubin, "Relevant Statistical Evaluations When Comparing Procedures for Analyzing Data"

Friday, August 30

8:30 – 9:00am: Breakfast
9:00 – 9:25am: Qianxiao Li
9:25 – 10:15am: Sarah Adel Bargal, "Grounding Deep Models for Improved Decision Making"

Abstract: Deep models are state-of-the-art for many computer vision tasks including object classification, action recognition, and captioning. As Artificial Intelligence systems that utilize deep models are becoming ubiquitous, it is becoming crucial to explain (ground) why they make certain decisions, and utilize such explanations (grounding) to further improve model performance. In this talk, I will present: (1) Frameworks in which grounding guides decision-making on the fly at test time by questioning whether the utilized evidence is ‘reasonable’, and during learning through the exploitation of new pathways in deep models. (2) A formulation that simultaneously grounds evidence in space and time, in a single pass, using top-down saliency. This visualizes the spatiotemporal cues that contribute to a deep recurrent neural network’s classification/captioning output. Based on these spatiotemporal cues, segments within a video that correspond with a specific action, or phrase from a caption, could be localized without explicitly optimizing/training for these tasks.
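As a reading aid, the sketch below illustrates the general idea of grounding a video model's decision in spatiotemporal evidence: differentiate the class score with respect to the input frames to obtain one saliency map per frame. It is a plain input-gradient stand-in under toy assumptions (PyTorch is assumed, and the `VideoClassifier` model and every name in it are hypothetical placeholders); it is not the single-pass top-down saliency formulation presented in the talk.

```python
# Hypothetical sketch: per-frame saliency for a video classifier via input
# gradients. Illustrates grounding a prediction in spatiotemporal evidence;
# NOT the specific top-down saliency method from the talk.
import torch
import torch.nn as nn

class VideoClassifier(nn.Module):
    """Toy stand-in: averages per-frame CNN features, then classifies."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, num_classes)

    def forward(self, video):                        # video: (T, 3, H, W)
        feats = self.features(video).flatten(1)      # (T, 16)
        return self.head(feats.mean(dim=0, keepdim=True))  # (1, num_classes)

def spatiotemporal_saliency(model, video, target_class):
    """Gradient of the target class score w.r.t. each input pixel and frame."""
    video = video.clone().requires_grad_(True)
    score = model(video)[0, target_class]
    score.backward()
    # Aggregate over color channels: one saliency map per frame.
    return video.grad.abs().sum(dim=1)               # (T, H, W)

model = VideoClassifier()
video = torch.rand(8, 3, 64, 64)                     # 8 frames
saliency = spatiotemporal_saliency(model, video, target_class=3)
print(saliency.shape)                                # torch.Size([8, 64, 64])
```

Thresholding the per-frame maps gives a crude localization of the frames and regions driving a prediction, which is the kind of evidence the talk proposes to question at test time.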

10:15 – 10:40am: Xiaoqin Wang, "Encoding and Decoding Auditory Information by the Brain"
10:40 – 11:00am: Coffee Break
11:00 – 11:15am: Yuanchun Shi, "From Human Action Data to User Input Intention"
11:15 – 11:30am: Bin Xu, "AI Practice for Gaokao: Knowledge Graph Construction for Chinese K12 Education"
11:30 – 11:45am: Peng Cui, "Stable Learning: The Convergence of Causal Inference and Machine Learning"
11:45am – 12:00pm: Hanzhong Liu, "Penalized Regression-Adjusted Average Treatment Effect Estimates in Randomized Experiments"
12:00 – 1:30pm: Lunch
1:30 – 2:15pm: Cengiz Pehlevan
2:15 – 3:00pm: Sergiy Verstyuk
3:00 – 3:30pm: Coffee Break
3:30 – 4:15pm: Xiao-Li Meng, "Artificial Bayesian Monte Carlo Integration: A Practical Resolution to the Bayesian (Normalizing Constant) Paradox"

Abstract: Advances in Markov chain Monte Carlo in the past 30 years have made Bayesian analysis a routine practice. However, there is virtually no practice of performing Monte Carlo integration from the Bayesian perspective; indeed, this problem has earned the “paradox” label in the context of computing normalizing constants (Wasserman, 2013). We first use the modeling-what-we-ignore idea of Kong et al. (2003) to explain that the crux of the paradox is not with the likelihood theory, which is essentially the same as for standard non-parametric probability/density estimation (Vardi, 1985), though, via group theory, it provides a richer framework for modeling the trade-off between statistical efficiency and computational efficiency. But there is a real Bayesian paradox: Bayesian analysis cannot be applied exactly for solving Bayesian computation, because performing the exact Bayesian Monte Carlo integration would require more computation than is needed to solve the original Monte Carlo problem. We then show that there is a practical resolution to this paradox using the profile likelihood obtained in Kong et al. (2006), and that this approximation is second-order valid asymptotically. We also investigate a more computationally efficient approximation via an artificial likelihood of Geyer (1994). This artificial likelihood approach is only first-order valid, but there is a computationally trivial adjustment to render it second-order valid. We demonstrate empirically the efficiency of these approximated Bayesian estimators, compared to the usual frequentist-based Monte Carlo estimators such as bridge sampling estimators (Meng and Wong, 1996). [This is joint work with Masatoshi Uehara.]

PDF with references
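For context on the frequentist baseline mentioned at the end of the abstract, here is a minimal NumPy sketch of the iterative bridge sampling estimator of Meng and Wong (1996) for a ratio of normalizing constants. The Gaussian example is purely illustrative, and the approximated Bayesian estimators discussed in the talk are not reproduced here; a serious implementation would also iterate in log space for numerical stability.

```python
# Minimal sketch of the iterative bridge sampling estimator
# (Meng and Wong, 1996) for r = Z1/Z2, given unnormalized densities
# q1, q2 and draws from each normalized density. Illustrative only.
import numpy as np

def bridge_sampling_ratio(q1, q2, x1, x2, iters=50):
    """Estimate Z1/Z2, where x1 ~ q1/Z1 and x2 ~ q2/Z2 are sample arrays."""
    n1, n2 = len(x1), len(x2)
    s1, s2 = n1 / (n1 + n2), n2 / (n1 + n2)
    r = 1.0                                   # initial guess
    for _ in range(iters):
        # Fixed-point update with the asymptotically optimal bridge.
        num = np.mean(q1(x2) / (s1 * q1(x2) + s2 * r * q2(x2)))
        den = np.mean(q2(x1) / (s1 * q1(x1) + s2 * r * q2(x1)))
        r = num / den
    return r

rng = np.random.default_rng(0)
q1 = lambda x: np.exp(-x**2 / 2)              # N(0, 1), Z1 = sqrt(2*pi)
q2 = lambda x: np.exp(-x**2 / 8)              # N(0, 4), Z2 = sqrt(8*pi)
x1 = rng.normal(0.0, 1.0, 10_000)
x2 = rng.normal(0.0, 2.0, 10_000)
print(bridge_sampling_ratio(q1, q2, x1, x2))  # ~0.5 = sqrt(2*pi)/sqrt(8*pi)
```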

 

Saturday, August 31

8:30 – 9:00am: Breakfast
9:00 – 9:45am: Brian Kulis
9:45 – 10:30am: Justin Solomon, "Linking the Theory and Practice of Optimal Transport"

Abstract:  Optimal transport is a theory linking probability to geometry, with applications across computer graphics, machine learning, and scientific computing.  While transport has long been recognized as a valuable theoretical tool, only recently have we developed the computational machinery needed to apply it to practical computational problems.  In this talk, I will discuss efforts with my students to scale up transport and related computations, showing that the best algorithm and model for this task depends on details of the application scenario.  In particular, we will consider settings in representation learning using entropically-regularized transport, Bayesian inference using semi-discrete transport, and graphics/PDE using dynamical Eulerian models.
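As background for the entropically regularized setting mentioned above, the following is a minimal NumPy sketch of the Sinkhorn iteration on which such formulations build. It is a toy instance under simplifying assumptions (small dense cost matrix, fixed iteration count, no log-domain stabilization), not code from the speaker's work.

```python
# Minimal sketch of entropically regularized optimal transport via
# Sinkhorn iterations. Illustrative only: practical implementations add
# log-domain stabilization and convergence checks.
import numpy as np

def sinkhorn(mu, nu, C, eps=0.05, iters=500):
    """Entropic OT plan between histograms mu, nu with cost matrix C."""
    K = np.exp(-C / eps)                 # Gibbs kernel
    v = np.ones_like(nu)
    for _ in range(iters):
        u = mu / (K @ v)                 # match row marginals
        v = nu / (K.T @ u)               # match column marginals
    return u[:, None] * K * v[None, :]   # transport plan P

# Toy example: transport between two histograms on a 1-D grid.
x = np.linspace(0, 1, 50)
mu = np.exp(-(x - 0.2)**2 / 0.01); mu /= mu.sum()
nu = np.exp(-(x - 0.7)**2 / 0.02); nu /= nu.sum()
C = (x[:, None] - x[None, :])**2         # squared-distance cost
P = sinkhorn(mu, nu, C)
print(P.sum(), (P * C).sum())            # total mass ~1, transport cost
```

At convergence the plan P has row sums mu and column sums nu, and (P * C).sum() approximates the regularized transport cost.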

10:30 – 11:00am: Coffee Break
11:00 – 11:45am: Mirac Suzgun
11:45am – 12:30pm: Jiafeng Chen and Suproteem Sarkar, "Robust and Extensible Deep Learning for Economic and Financial Applications"
12:30 – 2:00pm: Lunch
2:00 – 2:45pm: Scott Kominers
