3rd LoG NYC Workshop

Claudia Shi

Affiliation

Columbia University

Talk Title

Understanding the internal mechanisms of LLMs

Abstract

This talk examines current hypotheses about the internal structure of large language models, focusing on the circuit hypothesis and the linear representation hypothesis. We will look at recent work that aims to identify functional substructures and interpretable representations within these models, and explore how techniques from causal inference and probabilistic machine learning can support the analysis of these mechanisms.

Bio

Claudia Shi is a final-year Ph.D. student in Computer Science at Columbia University, advised by David Blei. Her research focuses on advancing the scientific understanding of large language models (LLMs) and their responsible deployment.

Website

https://www.claudiashi.com