BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//CMSA - ECPv6.15.17//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://cmsa.fas.harvard.edu
X-WR-CALDESC:Events for CMSA
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20220313T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20221106T020000
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20230920T103000
DTEND;TZID=America/New_York:20230920T113000
DTSTAMP:20260405T070206Z
CREATED:20240223T104903Z
LAST-MODIFIED:20240223T104903Z
UID:10002851-1695205800-1695209400@cmsa.fas.harvard.edu
SUMMARY:Exact Results in Flat Band Hubbard Models
DESCRIPTION:Topological Quantum Matter Seminar \nSpeaker: Jonah Herzog-Arbeitman\, Princeton University \nTitle: Exact Results in Flat Band Hubbard Models \nAbstract: Flat bands\, like those in the kagome lattice or twisted bilayer graphene\, are a natural setting for studying strongly coupled physics\, since the interaction strength is the only energy scale in the problem. They can exhibit unconventional behavior in the multi-orbital case: the mean-field theory of flat band attractive Hubbard models shows the possibility of superconductivity even though the Fermi velocity of the bands is strictly zero. However\, it is not necessary to resort to this approximation. We demonstrate that the ground states and low-energy excitations of a large class of attractive Hubbard models are exactly solvable\, offering a rare\, microscopic view of their physics. The solution reveals the importance of quantum geometry in escaping (some of) BCS phenomenology within a tractable and nontrivial strong-coupling theory.
URL:https://cmsa.fas.harvard.edu/event/tqms_92023/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:Topological Quantum Matter Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-Topological-Seminar-09.20.23.docx-1.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20230920T140000
DTEND;TZID=America/New_York:20230920T150000
DTSTAMP:20260405T070206Z
CREATED:20240227T083355Z
LAST-MODIFIED:20240227T083355Z
UID:10002873-1695218400-1695222000@cmsa.fas.harvard.edu
SUMMARY:The TinyStories Dataset: How Small Can Language Models Be And Still Speak Coherent English?
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: Ronen Eldan\, Microsoft Research \nTitle: The TinyStories Dataset: How Small Can Language Models Be And Still Speak Coherent English? \nAbstract: While generative language models exhibit powerful capabilities at large scale\, they struggle to produce coherent and fluent text when either the model or the number of training steps is too small: existing models below a few billion parameters often do not generate coherent text beyond a few sentences. We hypothesize that one of the main reasons for this strong reliance on size is the vast breadth and abundance of patterns in the datasets used to train those models\, which motivates the following question: Can we design a dataset that preserves the essential elements of natural language\, such as grammar\, vocabulary\, facts\, and reasoning\, but that is much smaller and more refined in its breadth and diversity? \nIn this talk\, we introduce TinyStories\, a synthetic dataset of short stories\, generated by GPT-3.5/4\, that contain only words that 3- to 4-year-olds typically understand. We show that TinyStories can be used to train and analyze language models that are much smaller than state-of-the-art models (below 10 million parameters)\, or that have much simpler architectures (only one transformer block)\, yet still produce fluent\, consistent\, and diverse stories several paragraphs long with almost perfect grammar\, and that demonstrate certain reasoning capabilities. We also show that the trained models are substantially more interpretable than larger ones: we can visualize and analyze their attention and activation patterns and show how these relate to the generation process and the story content. We hope that TinyStories can facilitate the development\, analysis\, and research of language models\, especially for low-resource or specialized domains\, and shed light on the emergence of language capabilities in LMs.
URL:https://cmsa.fas.harvard.edu/event/nt-92023/
LOCATION:Virtual
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-NTM-Seminar-09.20.2023.png
END:VEVENT
END:VCALENDAR