BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//CMSA - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://cmsa.fas.harvard.edu
X-WR-CALDESC:Events for CMSA
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20230312T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20231105T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20240310T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20241103T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20250309T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20251102T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241204T140000
DTEND;TZID=America/New_York:20241204T150000
DTSTAMP:20260510T152351Z
CREATED:20240907T180227Z
LAST-MODIFIED:20241212T205959Z
UID:10003410-1733320800-1733324400@cmsa.fas.harvard.edu
SUMMARY:Can Transformers Reason Logically? A Study in SAT-Solving
DESCRIPTION:New Technologies in Mathematics Seminar \nSpeaker: Leyan Pan\, Georgia Tech \nTitle: Can Transformers Reason Logically? A Study in SAT-Solving \nAbstract: Transformer-based LLMs have apparently demonstrated capabilities that resemble human reasoning. In our recent work\, we investigated the Boolean reasoning abilities of decoder-only Transformers equipped with Chain-of-Thought\, establishing that a Transformer model can decide all 3-SAT instances up to a bounded size (i.e.\, number of variables and clauses). In this talk\, I will first review recent studies that formally examine the expressiveness of Transformer models. Next\, I will explain how we establish an equivalence between Chain-of-Thought reasoning and an algorithm\, in our case\, the DPLL SAT-solving algorithm. I will then discuss how to encode 3-SAT formulas and partial assignments as vectors so that the high-level operations in DPLL can be represented as vector operations and implemented using attention mechanisms within Transformers. Finally\, I will present experimental results that support our theoretical predictions. I will also address why standard Transformers can only solve reasoning problems of bounded length\, leading to failures in length generalization\, and discuss potential solutions to overcome this limitation.
URL:https://cmsa.fas.harvard.edu/event/newtech_12424/
LOCATION:CMSA Room G10\, CMSA\, 20 Garden Street\, Cambridge\, MA\, 02138\, United States
CATEGORIES:New Technologies in Mathematics Seminar
ATTACH;FMTTYPE=image/png:https://cmsa.fas.harvard.edu/media/CMSA-NTM-Seminar-12.4.24.png
END:VEVENT
END:VCALENDAR