From Algebraic Geometry to Vision and AI: A Symposium Celebrating the Mathematical Work of David Mumford

On August 18 and 20, 2018, the Center of Mathematical Sciences and Applications and the Harvard University Mathematics Department will host a conference, From Algebraic Geometry to Vision and AI: A Symposium Celebrating the Mathematical Work of David Mumford. The talks will take place in the Science Center, Hall B.

Saturday, August 18th: a day of talks on vision, AI, and brain sciences
Monday, August 20th: a day of talks on mathematics

The full schedule for this event, including talk titles and abstracts, will be posted here when available.

Please register here. 

For a list of lodging options convenient to the Center, please visit our recommended lodgings page.

A downloadable version of the program is available here. 

Saturday, August 18, 2018

Human and Machine Intelligence: Geometry, Vision and AI

Time Speaker Title/Abstract
8:30 – 9:00am Breakfast
9:00 – 9:20am Symposium Kickoff
History and Perspectives
9:20 – 9:45am Jayant Shah Northeastern Title: The accuracy of solar eclipse prediction in ancient and medieval astronomy

Abstract: Predicting solar eclipses was one of the important challenges in ancient and medieval astronomy. Using a statistical approach, David Mumford tested the accuracy of the Chinese algorithm for predicting solar eclipses as formulated in Shoushili. Using David’s code, I have carried out a similar analysis of the Indian Tantrasangraha and the Greek Almagest. In this talk, I will describe the method and compare the accuracy of the three algorithms.

9:45 – 10:10am Laurent Younes Johns Hopkins Title: Some recent developments on elastic metrics between curves

Abstract: The presentation will review some recent results on H^1 metrics between curves in two or more dimensions, revisiting and slightly expanding on various recent contributions made over the past few years.
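
For orientation, one common form of such a first-order Sobolev (elastic) metric on plane curves, written in standard notation from the literature rather than necessarily the normalization used in the talk, is

\[
G^{a,b}_c(h,k) \;=\; \int_{S^1} \Big( a\,\langle D_s h, n\rangle \langle D_s k, n\rangle \;+\; b\,\langle D_s h, v\rangle \langle D_s k, v\rangle \Big)\, ds,
\]

where c is a closed curve, h and k are infinitesimal deformations of c, D_s is the derivative with respect to arc length, ds is arc-length measure, v and n are the unit tangent and normal of c, and a, b > 0 are weights.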

10:10 – 10:30am Break
10:30 – 10:55am Alan Yuille Johns Hopkins Title: David Mumford’s Contributions to Vision

Abstract: This talk gives an overview of David Mumford’s contributions to vision, focusing in particular on his work at Harvard and its relationship to current research in vision. David’s work can be roughly categorized into five research areas: (1) Segmentation and Image Parsing. (2) Pattern Theory as a unifying perspective. (3) Statistics of Images. (4) Theories of Shape. (5) Biology and Psychophysics of Vision. Attempting to capture David’s key contributions to these areas is, perhaps, equivalent to summarizing Shakespeare in ten minutes, but it will help set the stage for later talks.

10:55 – 11:15am Break
Brain, Neural and Cognitive Sciences
11:15 – 11:40am Michael Miller Johns Hopkins Title: On some of the complex geometries and mapping problems of the human brain
11:40 – 1:40pm Lunch
1:40 – 2:05pm Tai Sing Lee CMU Title: The Visual Cortex as a Compositional Graphical Model

Abstract: A central tenet of Mumford’s theory of the visual cortex is that the feedforward connections in the hierarchical visual system are performing analysis and the feedback connections are performing synthesis. The synthesis “explains away” the representation in each level, allowing only the residues/error signals to propagate up the hierarchy for further analysis. The hierarchical visual system itself can be conceptualized as a compositional graphical model for capturing the rich complexity of data in the natural environment. This represents a creative synthesis of ideas from computer vision for conceptualizing the computations in the visual cortex. I will discuss my recent findings in neurophysiology that resonate with some of Mumford’s ideas: (1) In addition to the known neural circuit of boundary detection and completion, we found that there exists a complementary circuit in the primary visual cortex for surface interpolation, as suggested by the energy functional approach for segmentation; (2) the connectivity of this functional circuit can be predicted by the co-occurrence relationships of 3D surface features in natural scenes, and learned automatically by graphical models; (3) the dictionary elements in V1 contain many highly specific local complex feature detectors that could support high-resolution visual reasoning and the construction of a system that can flexibly and dynamically compose simple concepts to form more complex and abstract concepts; (4) with learning, early visual cortical neurons become suppressed in the later part of their responses to the learned familiar global concepts relative to novel global image concepts, reminiscent of the phenomenon of “explaining away” in Mumford’s theory.
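
A minimal schematic of the analysis-by-synthesis loop described above, in my own notation rather than the speaker’s, is

\[
\hat{x}^{(\ell)} = g_\ell\big(x^{(\ell+1)}\big), \qquad r^{(\ell)} = x^{(\ell)} - \hat{x}^{(\ell)},
\]

where x^{(\ell)} is the representation at level \ell of the hierarchy, the feedback mapping g_\ell synthesizes a top-down prediction \hat{x}^{(\ell)}, and only the residual r^{(\ell)} propagates up for further analysis.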

2:05 – 2:30pm Josh Tenenbaum  MIT Title: Seeing the world’s structure: Pattern theory in the era of probabilistic programs and deep learning
Vision and Pattern Theory
2:30 – 2:55pm David Gu Stony Brook Title:
2:55- 3:15pm Break
3:15 – 3:40pm Stuart Geman Brown Title: 
3:40 – 4:00pm Coffee break
4:00 – 4:25pm Ying Nian Wu UCLA Title: A Tale of Three Families of Probabilistic Models

Abstract: I will review three families of probabilistic models in computer vision that can be traced back to the pattern theory developed by Grenander and advocated by Mumford for vision. They are the discriminative models for classification, the descriptive models that are based on descriptive statistics and energy functions, and the generative models that are based on latent variables. While the first family of models is usually trained in a supervised setting, the other two families can be learned in an unsupervised setting. I will review recent work, including our own, on relating and learning these models.
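
As a rough guide, in my own notation rather than the abstract’s, the three families can be written schematically as

\[
\text{discriminative: } p_\theta(y \mid x), \qquad
\text{descriptive: } p_\theta(x) = \frac{1}{Z(\theta)}\, e^{-E_\theta(x)}, \qquad
\text{generative: } p_\theta(x) = \int p_\theta(x \mid z)\, p(z)\, dz,
\]

where x is an image, y a label, E_\theta an energy function with normalizing constant Z(\theta), and z a latent variable.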

4:25 – 4:55pm Jitendra Malik Berkeley / FAIR Title: Learning to See and Act in a 3D world
4:55 – 5:25pm Song-Chun Zhu UCLA Title: Artificial Intelligence: The Era of Big Integration

Abstract: Mumford moved from algebraic geometry (AG) to AI in the 1980s to pursue a unified mathematical theory of intelligence. Recent advances in fields such as vision, language, and learning have inspired renewed interest among academics and the public in developing general AI technologies that are capable of communicating and collaborating with humans. This talk will discuss the current state of AI and an observation that AI is entering an era of big integration, embracing six research disciplines: vision, language, cognition, learning, robotics, and social ethics (game theory). It calls for a unified representation and inference framework for a wide range of tasks, and a new paradigm of “small data for big tasks” in contrast to the current deep learning paradigm of “big data for small tasks”. The talk will also discuss the layered infrastructures underneath human communication, collaboration, and learning, including the theory of mind; the challenges of constructing a cognitive architecture for human-machine communication and teaming; and recent progress on visual commonsense reasoning.

5:25 – 5:55pm Q&A

Monday, August 20, 2018

Algebraic Geometry and Shape

Time Speaker Title
9:00 – 9:05am Opening
9:05 – 9:55am Janos Kollar Princeton Title: Moduli spaces of algebraic varieties

Abstract: The aim of the talk is to discuss how Mumford’s work on the moduli of curves developed into the current moduli theory of higher-dimensional varieties.

9:55 – 10:10am Break
10:10 – 11:00am Emanuele Macri Northeastern Title: Bridgeland stability and applications

Abstract: One of the key ideas in the theory of derived categories, due to Bondal and Orlov in the 1990s, is that the derived category of coherent sheaves on a smooth projective variety should contain very important information about the geometry of the variety itself, for example about its birational properties.

A conjectural way to obtain such information is via the theory of moduli spaces of objects in the derived category, generalizing the existing theory of moduli spaces of vector bundles developed by Mumford, Narasimhan, Seshadri, Gieseker, Maruyama, and Simpson, among others. In 2003, motivated by previous work in high-energy physics by Douglas, Bridgeland introduced the notion of a stability condition for derived categories; this makes it possible to define and study such moduli spaces of objects.

In this talk, I will give an introduction to Bridgeland’s theory, focusing in particular on applications to problems in algebraic geometry. For instance, I will present Bayer’s new proof of the Brill-Noether theorem and a new proof of a theorem of Gruson-Peskine and Harris on the genus of space curves (joint work with Benjamin Schmidt).
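
For readers new to the subject, a Bridgeland stability condition on a triangulated category D can be sketched, omitting the Harder-Narasimhan and support conditions that make the definition precise, as a pair

\[
\sigma = (Z, \mathcal{A}), \qquad Z \colon K(\mathcal{A}) \to \mathbb{C}, \qquad Z(E) \in \mathbb{R}_{>0}\cdot e^{i\pi\phi_E}, \ \ \phi_E \in (0,1] \ \text{ for all nonzero } E \in \mathcal{A},
\]

where \mathcal{A} \subset D is the heart of a bounded t-structure and the group homomorphism Z is called the central charge.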

11:00 – 11:30am Break
11:30 – 12:20pm Aaron Pixton MIT Title: The tautological ring

Abstract: The tautological ring of the moduli space of smooth curves was introduced by Mumford in the 1980s in analogy with the cohomology of Grassmannians. I will discuss what is now known (or still unknown) about the structure of this ring.
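
For context, the basic generators Mumford considered are the kappa classes; in the standard notation (not taken from the abstract),

\[
\kappa_a \;=\; \pi_*\big(c_1(\omega_\pi)^{\,a+1}\big),
\]

where \pi \colon \mathcal{C}_g \to \mathcal{M}_g is the universal curve and \omega_\pi its relative dualizing sheaf; the tautological ring is the subring of the Chow (or cohomology) ring generated by these classes.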

12:20 – 2:10pm Lunch
2:10 – 3:00pm Burt Totaro UCLA Title: Rationality and algebraic cycles

Abstract: We survey the recent applications of the theory of algebraic cycles to the problem of determining which algebraic varieties are rational or stably rational.

3:00 – 3:10pm Break
3:10 – 4:00pm Avi Wigderson Princeton Title: Optimization, Computational Complexity and Invariant Theory 

Abstract: I will discuss a recent sequence of works connecting the areas in the title with each other (and with the other areas mentioned below). At the heart of the connections are the design and analysis of a new meta-algorithm for efficiently solving the null-cone and orbit-closure intersection problems for a large family of group actions. This family turns out to capture natural problems in other areas, including the word problem for skew fields in non-commutative algebra, the feasibility of Brascamp-Lieb inequalities in analysis, and membership in entanglement polytopes in quantum information theory.
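
For orientation, these are standard definitions, phrased here in my own words: for a group G acting linearly on a vector space V and vectors v, w \in V,

\[
\text{null cone: decide whether } 0 \in \overline{G \cdot v}; \qquad
\text{orbit-closure intersection: decide whether } \overline{G \cdot v} \cap \overline{G \cdot w} \neq \emptyset,
\]

where \overline{G \cdot v} denotes the closure of the orbit of v under the group action.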

4:00 – 4:30pm Break
4:30 – 5:20pm Peter Michor Vienna Title: ‘Shape spaces’ alias ‘Moduli spaces in the differentiable category’

Abstract: On the space Emb of all smooth embeddings (more generally, immersions) of a compact manifold M into a Riemannian manifold (N, ḡ), the Lie group of all diffeomorphisms of M acts from the right, and various groups of diffeomorphisms of N act from the left. Quotienting out the right action leads to a prime example of “shape space”, also called the “differentiable Chow variety” or the “nonlinear Grassmannian”, consisting of all submanifolds of N of type M. Invariant Riemannian metrics on Emb lead to Riemannian metrics on shape space, whose geodesic distances can be used to differentiate between shapes, and whose curvatures affect the statistics of shapes. The left action, via right-invariant Riemannian metrics on the diffeomorphism groups of N, also induces various metrics on shape spaces, which have found many applications from paleontology to computational anatomy.

In this overview talk I will illustrate many aspects of this circle of ideas; the results came out of collaboration with David Mumford. I will always use the space of differentiable immersed plane curves as the most basic example.
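
In symbols, as a schematic summary in my own notation rather than the speaker’s, the shape space is the quotient

\[
B(M,N) \;=\; \operatorname{Emb}(M,N) \,/\, \operatorname{Diff}(M),
\]

so a point of B(M,N) is an unparametrized submanifold of N diffeomorphic to M, and a Diff(M)-invariant Riemannian metric on Emb(M,N) descends to a Riemannian metric on B(M,N).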

PDF of Abstract

 
