
Workshop on Invariance and Geometry in Sensation, Action and Cognition

April 15, 2019 @ 9:15 am - April 17, 2019 @ 10:00 am


As part of the program on Mathematical Biology a workshop on Invariance and Geometry in Sensation, Action and Cognition will take place on April 15-17, 2019.

Legend has it that above the door to Plato’s Academy was inscribed “Μηδείς άγεωµέτρητος είσίτω µου τήν στέγην”, translated as “Let no one ignorant of geometry enter my doors”. While geometry and invariance have always been cornerstones of mathematics, they have traditionally not played an important part in biology, except in certain aspects of structural biology. The premise of this meeting is a tantalizing sense that geometry and invariance are also likely to be important in (neuro)biology and cognition. Since all organisms interact with the physical world, neural systems that extract information using the senses to guide action need appropriately invariant representations that are stable, reproducible and capable of being learned. These invariances are a function of the nature and type of signal, its corruption via noise, and the method of storage and use.

This hypothesis suggests many puzzles and questions: What representational geometries are reflected in the brain? Are they learned or innate? What happens to the invariances under realistic assumptions about noise, nonlinearity and finite computational resources? Can cases of mental disorders and consequences of brain damage be characterized as breakdowns in representational invariances? Can we harness these invariances and sensory contingencies to build more intelligent machines? The aim is to revisit these old neuro-cognitive problems using a series of modern lenses experimentally, theoretically and computationally, with some tutorials on how the mathematics and engineering of invariant representations in machines and algorithms might serve as useful null models.

In addition to talks, there will be a set of tutorial talks on the mathematical description of invariance (P.J. Olver), the computer vision aspects of invariant algorithms (S. Soatto), and the neuroscientific and cognitive aspects of invariance (TBA). The workshop will be held in room G10 of the CMSA, located at 20 Garden Street, Cambridge, MA. This workshop is organized by L. Mahadevan (Harvard), Talia Konkle (Harvard), Samuel Gershman (Harvard), and Vivek Jayaraman (HHMI).

For a list of lodging options convenient to the Center, please visit our recommended lodgings page.


Schedule:

Monday, April 15

Time Speaker Title/Abstract
8:30 – 9:00am Breakfast
9:00 – 9:15am Welcome and Introduction
9:15 – 10:00am Vivek Jayaraman Title: Insect cognition: Small tales of geometry & invariance

Abstract: Decades of field and laboratory experiments have allowed ethologists to discover the remarkable sophistication of insect behavior. Over the past couple of decades, physiologists have been able to peek under the hood to uncover sophistication in insect brain dynamics as well. In my talk, I will describe phenomena that relate to the workshop’s theme of geometry and invariance. I will outline how studying insects, and flies in particular, may enable an understanding of the neural mechanisms underlying these intriguing phenomena.

10:00 – 10:45am Elizabeth Torres Title: Connecting Cognition and Biophysical Motions Through Geometric Invariants and Motion Variability

Abstract: In the 1930s Nikolai Bernstein defined the degrees of freedom (DoF) problem. He asked how the brain could control abundant DoF and produce consistent solutions, when the internal space of bodily configurations had much higher dimensions than the space defining the purpose(s) of our actions. His question opened two fundamental problems in the field of motor control. One relates to the uniqueness or consistency of a solution to the DoF problem, while the other refers to the characterization of the diverse patterns of variability that such solutions produce.

In this talk I present a general geometric solution to Bernstein’s DoF problem and provide empirical evidence for symmetries and invariances that this solution provides during the coordination of complex naturalistic actions. I further introduce fundamentally different patterns of variability that emerge in deliberate vs. spontaneous movements, discovered in my lab while studying athletes and dancers performing interactive actions. Here I reformulate the DoF problem from the standpoint of the social brain and recast it using graph theory and network connectivity analyses amenable to the study of one of the most poignant developmental disorders of our times: Autism Spectrum Disorders.

I offer a new unifying framework to recast dynamic and complex cognitive and social behaviors of the full organism and to characterize biophysical motion patterns during migration of induced pluripotent stem cell colonies on their way to become neurons.

10:45 – 11:15am Coffee Break
11:15 – 12:00pm Peter Olver Title: Symmetry and invariance in cognition — a mathematical perspective

Abstract: Symmetry recognition and appreciation is fundamental in human cognition.  (It is worth speculating as to why this may be so, but that is not my intent.) The goal of these two talks is to survey old and new mathematical perspectives on symmetry and invariance.  Applications will arise from art, computer vision, geometry, and beyond, and will include recent work on 2D and 3D jigsaw puzzle assembly and an ongoing collaboration with anthropologists on the analysis and refitting of broken bones.  Mathematical prerequisites will be kept to a bare minimum.

12:00 – 12:45pm Stefano Soatto/Alessandro Achille Title: Information in the Weights and Emergent Properties of Deep Neural Networks

Abstract: We introduce the notion of information contained in the weights of a Deep Neural Network and show that it can be used to control and describe the training process of DNNs, and can explain how properties such as invariance to nuisance variability and disentanglement emerge naturally in the learned representation. Through its dynamics, stochastic gradient descent (SGD) implicitly regularizes the information in the weights, which can then be used to bound the generalization error through the PAC-Bayes bound. Moreover, the information in the weights can be used to define both a topology and an asymmetric distance in the space of tasks, which can then be used to predict the training time and the performance on a new task given a solution to a pre-training task.

While this information distance models the difficulty of transfer to a first approximation, we show the existence of non-trivial irreversible dynamics during the initial transient phase of convergence, when the network is acquiring information, which makes the approximation fail. This is closely related to critical learning periods in biology, and suggests that studying the initial convergence transient can yield important insights beyond those that can be gleaned from the well-studied asymptotics.

12:45 – 2:00pm Lunch
2:00 – 2:45pm Anitha Pasupathy Title: Invariant and non-invariant representations in mid-level ventral visual cortex

Abstract: My laboratory investigates how visual form is encoded in area V4, a critical mid-level stage of form processing in the macaque monkey. Our goal is to reveal how V4 representations underlie our ability to segment visual scenes and recognize objects. In my talk I will present results from two experiments that highlight the different strategies used by the visual system to achieve these goals. First, most V4 neurons exhibit form tuning that is exquisitely invariant to size and position, properties likely important to support invariant object recognition. On the other hand, form tuning in a majority of neurons is also highly dependent on the interior fill. Interestingly, unlike primate V4 neurons, units in a convolutional neural network trained to recognize objects (AlexNet) overwhelmingly exhibit fill-outline invariance. I will argue that this divergence between real and artificial circuits reflects the importance of local contrast in parsing visual scenes and overall scene understanding.

2:45 – 3:30pm Jacob Feldman Title: Bayesian skeleton estimation for shape representation and perceptual organization

Abstract: In this talk I will briefly summarize a framework in which shape representation and perceptual organization are reframed as probabilistic estimation problems. The approach centers around the goal of identifying the skeletal model that best “explains” a given shape. A Bayesian solution to this problem requires identifying a prior over shape skeletons, which penalizes complexity, and a likelihood model, which quantifies how well any particular skeleton model fits the data observed in the image. The maximum-posterior skeletal model thus constitutes the most “rational” interpretation of the image data consistent with the given assumptions. This approach can easily be extended and generalized in a number of ways, allowing a number of traditional problems in perceptual organization to be “probabilized.” I will briefly illustrate several such extensions, including (1) figure/ground and grouping, (2) 3D shape, and (3) shape similarity.

3:30 – 4:00pm Tea Break
4:00 – 4:45pm Moira Dillon Title: Euclid’s Random Walk: Simulation as a tool for geometric reasoning through development

Abstract: Formal geometry lies at the foundation of millennia of human achievement in domains such as mathematics, science, and art. While formal geometry’s propositions rely on abstract entities like dimensionless points and infinitely long lines, the points and lines of our everyday world all have dimension and are finite. How, then, do we get to abstract geometric thought? In this talk, I will provide evidence that evolutionarily ancient and developmentally precocious sensitivities to the geometry of our everyday world form the foundation of, but also limit, our mathematical reasoning. I will also suggest that successful geometric reasoning may emerge through development when children abandon incorrect, axiomatic-based strategies and come to rely on dynamic simulations of physical entities. While problems in geometry may seem answerable by immediate inference or by deductive proof, human geometric reasoning may instead rely on noisy, dynamic simulations.

4:45 – 5:30pm Michael McCloskey Title: Axes and Coordinate Systems in Representing Object Shape and Orientation

Abstract: I describe a theoretical perspective in which a) object shape is represented in an object-centered reference frame constructed around orthogonal axes; and b) object orientation is represented by mapping the object-centered frame onto an extrinsic (egocentric or environment-centered) frame.  I first show that this perspective is motivated by, and sheds light on, object orientation errors observed in neurotypical children and adults, and in a remarkable case of impaired orientation perception. I then suggest that orientation errors can be used to address questions concerning how object axes are defined on the basis of object geometry—for example, what aspects of object geometry (e.g., elongation, symmetry, structural centrality of parts) play a role in defining an object’s principal axis?

5:30 – 6:30pm Reception

 

Tuesday, April 16

Time Speaker Title/Abstract
8:30 – 9:00am Breakfast
9:00 – 9:45am Peter Olver Title: Symmetry and invariance in cognition — a mathematical perspective

Abstract: Symmetry recognition and appreciation is fundamental in human cognition.  (It is worth speculating as to why this may be so, but that is not my intent.) The goal of these two talks is to survey old and new mathematical perspectives on symmetry and invariance.  Applications will arise from art, computer vision, geometry, and beyond, and will include recent work on 2D and 3D jigsaw puzzle assembly and an ongoing collaboration with anthropologists on the analysis and refitting of broken bones.  Mathematical prerequisites will be kept to a bare minimum.

9:45 – 10:30am Stefano Soatto/Alessandro Achille Title: Information in the Weights and Emergent Properties of Deep Neural Networks

Abstract: We introduce the notion of information contained in the weights of a Deep Neural Network and show that it can be used to control and describe the training process of DNNs, and can explain how properties such as invariance to nuisance variability and disentanglement emerge naturally in the learned representation. Through its dynamics, stochastic gradient descent (SGD) implicitly regularizes the information in the weights, which can then be used to bound the generalization error through the PAC-Bayes bound. Moreover, the information in the weights can be used to define both a topology and an asymmetric distance in the space of tasks, which can then be used to predict the training time and the performance on a new task given a solution to a pre-training task.

While this information distance models the difficulty of transfer to a first approximation, we show the existence of non-trivial irreversible dynamics during the initial transient phase of convergence, when the network is acquiring information, which makes the approximation fail. This is closely related to critical learning periods in biology, and suggests that studying the initial convergence transient can yield important insights beyond those that can be gleaned from the well-studied asymptotics.

10:30 – 11:00am Coffee Break
11:00 – 11:45am Jeannette Bohg Title: On perceptual representations and how they interact with actions and physical representations

Abstract: I will discuss the hypothesis that perception is active and shaped by our task and our expectations about how the world behaves upon physical interaction. Recent approaches in robotics follow this insight that perception is facilitated by physical interaction with the environment. First, interaction creates a rich sensory signal that would otherwise not be present. And second, knowledge of the regularity in the combined space of sensory data and action parameters facilitates the prediction and interpretation of the signal. In this talk, I will present two examples from our previous work where a predictive task facilitates autonomous robot manipulation by biasing the representation of the raw sensory data. I will present results on visual but also haptic data.

11:45 – 12:30pm Dagmar Sternad Title: Exploiting the Geometry of the Solution Space to Reduce Sensitivity to Neuromotor Noise

Abstract: Control and coordination of skilled action is frequently examined in isolation as a neuromuscular problem. However, goal-directed actions are guided by information that creates solutions that are defined as a relation between the actor and the environment. We have developed a task-dynamic approach that starts with a physical model of the task and mathematical analysis of the solution spaces for the task. Based on this analysis we can trace how humans develop strategies that meet complex demands by exploiting the geometry of the solution space. Using three interactive tasks – throwing or bouncing a ball and transporting a “cup of coffee” – we show that humans develop skill by: 1) finding noise-tolerant strategies and channeling noise into task-irrelevant dimensions, 2) exploiting solutions with dynamic stability, and 3) optimizing predictability of the object dynamics. These findings are the basis for developing propositions about the controller: complex actions are generated with dynamic primitives, attractors with few invariant types that overcome substantial delays and noise in the neuro-mechanical system.

12:30 – 2:00pm Lunch
2:00 – 2:45pm Sam Ocko Title: Emergent Elasticity in the Neural Code for Space

Abstract: To navigate a novel environment, animals must construct an internal map of space by combining information from two distinct sources: self-motion cues and sensory perception of landmarks. How do known aspects of neural circuit dynamics and synaptic plasticity conspire to construct such internal maps, and how are these maps used to maintain representations of an animal’s position within an environment? We demonstrate analytically how a neural attractor model that combines path integration of self-motion with Hebbian plasticity in synaptic weights from landmark cells can self-organize a consistent internal map of space as the animal explores an environment. Intriguingly, the emergence of this map can be understood as an elastic relaxation process between landmark cells mediated by the attractor network during exploration. Moreover, we verify several experimentally testable predictions of our model, including: (1) systematic deformations of grid cells in irregular environments, (2) path-dependent shifts in grid cells towards the most recently encountered landmark, (3) a dynamical phase transition in which grid cells can break free of landmarks in altered virtual reality environments, and (4) the creation of topological defects in grid cells. Taken together, our results conceptually link known biophysical aspects of neurons and synapses to an emergent solution of a fundamental computational problem in navigation, while providing a unified account of disparate experimental observations.

2:45 – 3:30pm Tatyana Sharpee Title: Hyperbolic geometry of the olfactory space

Abstract: The sense of smell can be used to avoid poisons or estimate a food’s nutrition content because biochemical reactions create many by-products. Thus, the production of a specific poison by a plant or bacteria will be accompanied by the emission of certain sets of volatile compounds. An animal can therefore judge the presence of poisons in the food by how the food smells. This perspective suggests that the nervous system can classify odors based on statistics of their co-occurrence within natural mixtures rather than from the chemical structures of the ligands themselves. We show that this statistical perspective makes it possible to map odors to points in a hyperbolic space. Hyperbolic coordinates have a long but often underappreciated history of relevance to biology. For example, these coordinates approximate distance between species computed along dendrograms, and more generally between points within hierarchical tree-like networks. We find that both natural odors and human perceptual descriptions of smells can be described using a three-dimensional hyperbolic space. This match in geometries can avoid distortions that would otherwise arise when mapping odors to perception. We identify three axes in the perceptual space that are aligned with odor pleasantness, its molecular boiling point and acidity. Because the perceptual space is curved, one can predict odor pleasantness by knowing the coordinates along the molecular boiling point and acidity axes.
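
As a concrete illustration of the kind of embedding the abstract describes, distances in the Poincaré-ball model of hyperbolic space can be computed directly. The odor coordinates below are hypothetical, chosen only to show how distances expand toward the boundary; this is a sketch of the geometry, not the authors' fitting procedure:

```python
import numpy as np

def poincare_distance(u, v):
    """Geodesic distance between two points inside the unit Poincare ball,
    a standard coordinate model of hyperbolic space."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    sq = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return np.arccosh(1.0 + 2.0 * sq / denom)

# Two hypothetical odor pairs with the same 90-degree angular separation,
# one pair near the center of the ball, one near the boundary:
near_a, near_b = [0.1, 0.0, 0.0], [0.0, 0.1, 0.0]
far_a, far_b = [0.7, 0.0, 0.0], [0.0, 0.7, 0.0]

# Hyperbolic distance grows rapidly toward the boundary, giving the
# tree-like expansion that matches dendrogram-style (hierarchical) data.
print(poincare_distance(far_a, far_b) > poincare_distance(near_a, near_b))  # True
```

This boundary expansion is what lets a low-dimensional hyperbolic space mimic distances along hierarchical tree-like networks.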

3:30 – 4:00pm Tea Break
4:00 – 4:45pm Ed Connor Title: Representation of solid geometry in object vision cortex

Abstract: There is a fundamental tension in object vision between the 2D nature of retinal images and the 3D nature of physical reality. Studies of object processing in the ventral pathway of primate visual cortex have focused mainly on 2D image information. Our latest results, however, show that representations of 3D geometry predominate even in V4, the first object-specific stage in the ventral pathway. The majority of V4 neurons exhibit strong responses and clear selectivity for solid, 3D shape fragments. These responses are remarkably invariant across radically different image cues for 3D shape: shading, specularity, reflection, refraction, and binocular disparity (stereopsis). In V4 and in subsequent stages of the ventral pathway, solid shape geometry is represented in terms of surface fragments and medial axis fragments. Whole objects are represented by ensembles of neurons signaling the shapes and relative positions of their constituent parts. The neural tuning dimensionality of these representations includes principal surface curvatures and their orientations, surface normal orientation, medial axis orientation, axial curvature, axial topology, and position relative to object center of mass. Thus, the ventral pathway implements a rapid transformation of 2D image data into explicit representations of 3D geometry, providing cognitive access to the detailed structure of physical reality.

4:45 – 5:30pm L. Mahadevan Title: Simple aspects of geometry and probability in perception

Abstract: Inspired by problems associated with noisy perception, I will discuss two questions: (i) how might we test people’s perception of probability in a geometric context? (ii) can one construct invariant descriptions of 2D images using simple notions of probabilistic geometry? Along the way, I will highlight other questions that the intertwining of geometry and probability raises in a broader perceptual context.


Wednesday, April 17

Time Speaker Title/Abstract
8:30 – 9:00am Breakfast
9:00 – 9:45am Gily Ginosar Title: The 3D geometry of grid cells in flying bats

Abstract: The medial entorhinal cortex (MEC) contains a variety of spatial cells, including grid cells and border cells. In 2D, grid cells fire when the animal passes near the vertices of a 2D spatial lattice (or grid), which is characterized by circular firing-fields separated by fixed distances, and 60° local angles, resulting in a hexagonal structure. Although many animals navigate in 3D space, no studies have examined the 3D volumetric firing of MEC neurons. Here we addressed this by training Egyptian fruit bats to fly in a large room (5.8 × 4.6 × 2.7 m), while we wirelessly recorded single neurons in MEC. We found 3D border cells and 3D head-direction cells, as well as many neurons with multiple spherical firing-fields. 20% of the multi-field neurons were 3D grid cells, exhibiting a narrow distribution of characteristic distances between neighboring fields – but not a perfect 3D global lattice. The 3D grid cells formed a functional continuum with less structured multi-field neurons. Both 3D grid cells and multi-field cells exhibited an anatomical gradient of spatial scale along the dorso-ventral axis of MEC, with inter-field spacing increasing ventrally – similar to 2D grid cells in rodents. We modeled 3D grid cells and multi-field cells as emerging from pairwise interactions between fields, using an energy potential that induces repulsion at short distances and attraction at long distances. Our analysis shows that the model explains the data significantly better than a random arrangement of fields. Interestingly, simulating the exact same model in 2D yielded a hexagonal-like structure, akin to grid cells in rodents. Together, the experimental data and preliminary modeling suggest that the global property of grid cells is multiple fields that repel each other with a characteristic distance-scale between adjacent fields – which in 2D yields a global hexagonal lattice while in 3D yields only local structure but no global lattice.

Gily Ginosar 1 , Johnatan Aljadeff 2 , Yoram Burak 3 , Haim Sompolinsky 3 , Liora Las 1 , Nachum Ulanovsky 1

(1) Department of Neurobiology, Weizmann Institute of Science, Rehovot 76100, Israel

(2) Department of Bioengineering, Imperial College London, London, SW7 2AZ, UK

(3) The Edmond and Lily Safra Center for Brain Sciences, and Racah Institute of Physics, The Hebrew University of Jerusalem, Jerusalem, 91904, Israel
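
The pairwise-interaction idea in the abstract (repulsion between nearby fields, attraction at long range, with a characteristic distance-scale) can be illustrated with a simple energy function. The Lennard-Jones-like form and the field positions below are assumptions chosen for illustration, not the potential fitted in the study:

```python
import numpy as np

def pair_energy(d, d0=1.0):
    """Hypothetical pair potential: repulsive for d < d0, attractive beyond.
    A Lennard-Jones-like form is one simple choice with this property;
    its minimum (energy -1) sits at the characteristic distance d = d0."""
    r = d0 / d
    return r**12 - 2 * r**6

def total_energy(fields, d0=1.0):
    """Sum pairwise energies over all firing-field centers."""
    fields = np.asarray(fields, float)
    E = 0.0
    for i in range(len(fields)):
        for j in range(i + 1, len(fields)):
            E += pair_energy(np.linalg.norm(fields[i] - fields[j]), d0)
    return E

# Fields spaced near the characteristic distance have lower energy than a
# compressed arrangement, so relaxation pushes fields apart locally without
# enforcing any global lattice:
regular = [[0, 0, 0], [1, 0, 0], [2, 0, 0]]
crowded = [[0, 0, 0], [0.5, 0, 0], [1.0, 0, 0]]
print(total_energy(regular) < total_energy(crowded))  # True
```

Minimizing such an energy in 2D favors hexagonal packings, while in 3D it yields only a local distance-scale, matching the qualitative claim of the abstract.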

9:45 – 10:30am Sandro Romani Title: Neural networks for 3D rotations

Abstract: Studies in rodents, bats, and humans have uncovered the existence of neurons that encode the orientation of the head in 3D. Classical theories of the head-direction (HD) system in 2D rely on continuous attractor neural networks, where neurons with similar heading preference excite each other, while inhibiting other HD neurons. Local excitation and long-range inhibition promote the formation of a stable “bump” of activity that maintains a representation of heading. The extension of HD models to 3D is hindered by two complications: (i) 3D rotations are non-commutative, and (ii) the space described by all possible rotations of an object has a non-trivial topology. This topology is not captured by standard parametrizations such as Euler angles (e.g. yaw, pitch, roll). For instance, with these parametrizations, a small change of the orientation of the head could result in a dramatic change of neural representation. We used methods from the representation theory of groups to develop neural network models that exhibit patterns of persistent activity of neurons mapped continuously to the group of 3D rotations. I will further discuss how these networks can (i) integrate vestibular inputs to update the representation of heading, and (ii) be used to interpret “mental rotation” experiments in humans.

This is joint work with Hervé Rouault (CENTURI) and Alon Rubin (Weizmann Institute of Science).
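
The first complication mentioned in the abstract, the non-commutativity of 3D rotations, is easy to verify numerically with rotation matrices; a minimal sketch, independent of the network model itself:

```python
import numpy as np

def Rx(a):
    """Rotation about the x-axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def Rz(a):
    """Rotation about the z-axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

a = np.pi / 2
# Order matters in 3D: pitching then yawing is not yawing then pitching.
print(np.allclose(Rx(a) @ Rz(a), Rz(a) @ Rx(a)))  # False

# By contrast, planar rotations commute, which is why classical 2D
# head-direction ring models avoid this complication entirely.
print(np.allclose(Rz(0.3) @ Rz(0.5), Rz(0.5) @ Rz(0.3)))  # True
```

The second complication, the topology of the rotation group SO(3), is what rules out any globally smooth Euler-angle mapping onto a neural sheet.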

10:30 – 11:00am Coffee Break
11:00 – 11:45am Sam Gershman Title: The hippocampus as a predictive map

Abstract: A cognitive map has long been the dominant metaphor for hippocampal function, embracing the idea that place cells encode a geometric representation of space. However, evidence for predictive coding, reward sensitivity and policy dependence in place cells suggests that the representation is not purely spatial. I approach this puzzle from a reinforcement learning perspective: what kind of spatial representation is most useful for maximizing future reward? I show that the answer takes the form of a predictive representation. This representation captures many aspects of place cell responses that fall outside the traditional view of a cognitive map. Furthermore, I argue that entorhinal grid cells encode a low-dimensional basis set for the predictive representation, useful for suppressing noise in predictions and extracting multiscale structure for hierarchical planning.
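
A standard formalization of such a predictive map is the successor representation, M = sum_t gamma^t T^t = (I - gamma T)^(-1), whose rows act as predictive place fields and whose eigenvectors supply a low-dimensional basis. A minimal sketch on a hypothetical ring of states under a random-walk policy (the environment and policy here are illustrative assumptions):

```python
import numpy as np

n, gamma = 8, 0.9

# Random-walk transition matrix on a ring of n states.
T = np.zeros((n, n))
for s in range(n):
    T[s, (s - 1) % n] = 0.5
    T[s, (s + 1) % n] = 0.5

# Successor representation: expected discounted future state occupancy.
M = np.linalg.inv(np.eye(n) - gamma * T)

# Each row of M is a predictive "place field": occupancy is highest at
# the current state and falls off with distance along the ring.
print(np.argmax(M[3]))  # 3

# The eigenvectors of M are periodic functions on the ring, a toy
# analogue of the multiscale basis the abstract attributes to grid cells.
eigvals, eigvecs = np.linalg.eigh((M + M.T) / 2)
```

Because the successor matrix depends on the policy through T, the same formalism captures the policy dependence of place-cell responses noted in the abstract.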

11:45 – 12:30pm Lucia Jacobs Title: The adaptive geometry of a chemosensor: the origin and function of the vertebrate nose

Abstract: A defining feature of a living organism, from prokaryotes to plants and animals, is the ability to orient to chemicals. The distribution of chemicals, whether in water, air or on land, is used by organisms to locate and exploit spatially distributed resources, such as nutrients and reproductive partners. In animals, the evolution of a nervous system coincided with the evolution of paired chemosensors. In contemporary insects, crustaceans, mollusks and vertebrates, including humans, paired chemosensors confer a stereo olfaction advantage on the animal’s ability to orient in space. Among vertebrates, however, this function faced a new challenge with the invasion of land. Locomotion on land created a new conflict between respiration and spatial olfaction in vertebrates. The need to resolve this conflict could explain the current diversity of vertebrate nose geometries, which could have arisen due to species differences in the demand for stereo olfaction. I will examine this idea in more detail in the order Primates, focusing on Old World primates, in particular, the evolution of an external nose in the genus Homo.

12:30 – 1:30pm Lunch
1:30 – 2:15pm Talia Konkle Title: The shape of things and the organization of object-selective cortex

Abstract: When we look at the world, we effortlessly recognize the objects around us and can bring to mind a wealth of knowledge about their properties. In part 1, I’ll present evidence that neural responses to objects are organized by high-level dimensions of animacy and size, but with underlying neural tuning to mid-level shape features. In part 2, I’ll present evidence that representational structure across much of the visual system has the requisite structure to predict visual behavior. Together, these projects suggest that there is a ubiquitous “shape space” mapped across all of occipitotemporal cortex that underlies our visual object processing capacities. Based on these findings, I’ll speculate that the large-scale spatial topography of these neural responses is critical for pulling explicit content out of a representational geometry.

2:15 – 3:00pm Vijay Balasubramanian Title: Becoming what you smell: adaptive sensing in the olfactory system

Abstract: I will argue that the circuit architecture of the early olfactory system provides an adaptive, efficient mechanism for compressing the vast space of odor mixtures into the responses of a small number of sensors.  In this view, the olfactory sensory repertoire employs a disordered code to compress a high dimensional olfactory space into a low dimensional receptor response space while preserving distance relations between odors.  The resulting representation is dynamically adapted to efficiently encode the changing environment of volatile molecules.  I will show that this adaptive combinatorial code can be efficiently decoded by systematically eliminating candidate odorants that bind to silent receptors.  The resulting algorithm for “estimation by elimination” can be implemented by a neural network that is remarkably similar to the early olfactory pathway in the brain.  The theory predicts a relation between the diversity of olfactory receptors and the sparsity of their responses that matches animals from flies to humans.   It also predicts specific deficits in olfactory behavior that should result from optogenetic manipulation of the olfactory bulb.
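
The "estimation by elimination" step can be sketched as follows. The binding matrix and odor mixture here are simulated stand-ins (hypothetical), showing only the elimination logic described in the abstract: any odorant that binds a silent receptor cannot be present.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sensing matrix: binding[i, j] = 1 if receptor i binds
# odorant j. Sparse, disordered binding, as in the combinatorial-code view.
n_receptors, n_odorants = 30, 100
binding = (rng.random((n_receptors, n_odorants)) < 0.1).astype(int)

def decode_by_elimination(response, binding):
    """Estimate which odorants could be present from binary receptor
    responses: eliminate every odorant that binds a silent receptor."""
    silent = response == 0
    excluded = binding[silent].any(axis=0)
    return np.flatnonzero(~excluded)  # surviving candidate odorants

# Simulate a sparse mixture and the (noiseless) receptor response:
present = rng.choice(n_odorants, size=3, replace=False)
response = (binding[:, present].sum(axis=1) > 0).astype(int)

candidates = decode_by_elimination(response, binding)
# In the noiseless case, every truly present odorant survives elimination:
print(set(present) <= set(candidates))  # True
```

The sparser the responses and the more diverse the receptors, the more candidates each silent receptor eliminates, which is the intuition behind the predicted relation between receptor diversity and response sparsity.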

3:00 – 3:45pm Ila Fiete Title: Invariance, stability, geometry, and flexibility in spatial navigation circuits

Abstract: I will describe how the geometric invariances or symmetries of the external world are reflected in the symmetries of neural circuits that represent it, using the example of the brain’s networks for spatial navigation. I will discuss how these symmetries enable spatial memory, evidence integration, and robust representation. At the same time, I will discuss how these seemingly rigid circuits with their inscribed symmetries can be harnessed to represent a range of spatial and non-spatial cognitive variables with high flexibility.

3:45 – 4:00pm L. Mahadevan – Summary
