This is a course on geometric aspects of deep learning theory. Broadly speaking, we’ll investigate the question: How might human-interpretable concepts be expressed in the geometry of their data encodings, and how does this geometry interact with the computational units and higher-level algebraic structures in various parameterized function classes, especially neural network classes?

Sep. 10-Nov. 1: Tuesdays and Thursdays, 2:30-3:45 pm, Harvard CMSA Math & ML program, 20 Garden St., Cambridge, Room G10

During the Sep. 10-Nov. 1 portion of the course, presented as part of the Math and Machine Learning program, we will explore the current state of research on mechanistic interpretability of transformers, the architecture underlying large language models like ChatGPT.

Topics in Deep Learning Theory
Eli Grigsby
CMSA Room G10, 20 Garden Street, Cambridge, MA, United States
