CS Colloquium: “(How) Do LLMs Reason?” with Subbarao Kambhampati
September 5, 2025 11 a.m.–12:15 p.m.
Location
Shanahan Center, Auditorium
320 E. Foothill Blvd.
Claremont, CA 91711
Contact
Morgan McArdle
mmcardle@g.hmc.edu
909.607.0299
Details
“(How) Do LLMs Reason?”
Large Language Models, auto-regressively trained on the digital footprints of humanity, have shown impressive abilities in generating coherent text completions for a vast variety of prompts. While they excelled from the beginning at producing completions in an appropriate style, factuality and reasoning/planning abilities remained their Achilles heel (premature claims notwithstanding). More recently, a breed of approaches dubbed “large reasoning models” (LRMs) has emerged. These approaches leverage two broad and largely independent ideas: (i) test-time inference, which involves getting the base LLM to do more work than simply producing the most likely completion, including using it in generate-and-test approaches such as LLM-Modulo that pair LLM generation with a bank of verifiers; and (ii) post-training methods, which go beyond simple auto-regressive training on web corpora by collecting, filtering, and training on derivational traces (often anthropomorphically referred to as “chains of thought” or “reasoning traces”), modifying the base LLM with them via supervised fine-tuning or reinforcement learning.

Their success on benchmarks notwithstanding, significant questions and misunderstandings surround these methods: whether they can provide correctness guarantees, whether they perform adaptive computation, whether the intermediate tokens they generate can be viewed as reasoning traces in any meaningful sense, and whether they are costly Rube Goldberg reasoning machines that incrementally compile verifier signal into the generator, or truly the start of a golden era of general-purpose System 1+2 AI systems. Drawing from our ongoing work in planning, I will present a broad perspective on these approaches, their promise, and their limitations.
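As a rough illustration of the generate-and-test idea behind LLM-Modulo, the Python sketch below pairs a generator with a bank of verifiers; all names here (generate_candidate, verifiers) are hypothetical placeholders standing in for a real LLM call and real sound verifiers, not the speaker’s actual implementation.

# Minimal sketch of an LLM-Modulo style generate-and-test loop.
# generate_candidate and the verifiers are illustrative placeholders.
from typing import Callable, List, Optional

def llm_modulo(prompt: str,
               generate_candidate: Callable[[str], str],
               verifiers: List[Callable[[str], Optional[str]]],
               max_rounds: int = 10) -> Optional[str]:
    """Ask the generator for a candidate, run it past a bank of
    verifiers, and feed any critiques back into the next round."""
    feedback = ""
    for _ in range(max_rounds):
        candidate = generate_candidate(prompt + feedback)
        # Each verifier returns None on success or a critique string.
        critiques = [c for v in verifiers if (c := v(candidate)) is not None]
        if not critiques:
            return candidate  # all verifiers accept the candidate
        # Let the generator see the critiques and attempt a repair.
        feedback = "\n\nVerifier feedback:\n" + "\n".join(critiques)
    return None  # no candidate passed within the round budget

Note that in this pattern any correctness guarantee rests with the verifiers rather than with the generator’s intermediate tokens.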
Speaker
Subbarao Kambhampati is a professor of computer science at Arizona State University. Kambhampati studies fundamental problems in planning and decision making, motivated in particular by the challenges of human-aware AI systems. He is a fellow of the Association for the Advancement of Artificial Intelligence, the American Association for the Advancement of Science, and the Association for Computing Machinery, and a recent recipient of the AAAI Patrick H. Winston Outstanding Educator Award. He has served as president of the Association for the Advancement of Artificial Intelligence, a trustee of the International Joint Conference on Artificial Intelligence, chair of AAAS Section T (Information, Computing, and Communication), and a founding board member of the Partnership on AI. Kambhampati’s research, as well as his views on the progress and societal impacts of AI, has been featured in multiple national and international media outlets. He can be followed on Twitter @rao2z.