Kai-Wei Chang's Lab

UCLA NLP Seminar Series

Welcome to our weekly seminar series.

Date      | Speaker                 | Title
April 17  | Matthew Finlayson       | The Search for an Unforgeable Language Model Signature
April 24  | Idan Blank              | Understanding “Understanding” In Large Language Models
May 22    | Eunice Jun              | TBD
May 29    | Taylor Berg-Kirkpatrick | TBD

🚀 Upcoming Talks


Understanding “Understanding” In Large Language Models

Speaker: Idan Blank

Date: April 24, 2026, 2:00 PM PT

Location: Rm 289, Engineering VI

Speaker Bio: Idan Blank is a cognitive scientist and an assistant professor of Psychology at UCLA, with a joint appointment in the Department of Linguistics. He received his PhD from MIT, working with Ev Fedorenko and Nancy Kanwisher, and completed his postdoctoral training at the McGovern Institute for Brain Research (also with Ev). His lab uses functional neuroimaging, behavioral methods, and large language models to study how different information sources are combined, and how different cognitive processes interact, during language comprehension in both biological and artificial minds.

Abstract: Do Large Language Models (LLMs) "understand" the language that they process? In this talk, I'll describe three studies that adapt experimental approaches from human psycho- and neuro-linguistics to test whether LLMs exhibit signatures of human-like comprehension. First, I will ask whether semantic information can "penetrate" and influence syntactic processing in LLMs—like it does in humans—or whether some syntactic processing stages in LLMs are "encapsulated" from meaning. Second, I will ask whether LLMs represent a fundamental aspect of linguistic meaning: distinguishing between agents and patients in sentences. Third, I will ask whether Large Vision-Language Models use visual context to interpret language in a manner that exhibits pragmatic-like sensitivity to whether expressions that refer to objects are felicitous, under-informative, or over-informative. These studies reveal both similarities and differences between LLMs and humans, breaking comprehension into theoretically informed constructs and promoting a nuanced view of how, and in what sense, LLMs understand language.

🚀 Past Talks


The Search for an Unforgeable Language Model Signature

Speaker: Matthew Finlayson

Date: April 17, 2026, 2:00 PM PT

Location: Rm 289, Engineering VI

Speaker Bio: Matthew Finlayson is a PhD candidate in computer science at the University of Southern California. He is advised by Swabha Swayamdipta and Xiang Ren. His research focuses on the security and interpretability of large language models, including work on unforgeable signatures for language models and information leakage from model interfaces. He is supported by an NSF Graduate Research Fellowship and was previously a pre-doctoral researcher at the Allen Institute for AI.

Abstract: As language models become ubiquitous, reliably attributing text to specific models is an increasingly important challenge in model forensics. Existing approaches—watermarking, text classifiers, backdoor fingerprints, and input/output matching—each require significant assumptions such as provider cooperation, training data access, or prompt knowledge. We present an alternative approach based on naturally occurring signatures in language model outputs. In particular, language model parameters impose geometric constraints on their outputs, and these structures serve as unique model identifiers. Early work on model signatures based on linear constraints suffered from a major drawback: an adversary could "forge" a signature by reconstructing the constraints from model outputs. We explore elliptical and ranking constraints, which move us closer to provably unforgeable (or forgery-resistant) language model signatures via connections to high-dimensional ellipse fitting and oriented matroid theory. These results point toward truly unforgeable signatures that every language model inherently possesses, requiring no provider implementation and no access to model internals.

Organizing Committee

Faculty

Prof. Kai-Wei Chang

Prof. Nanyun Peng

Prof. Saadia Gabriel

Prof. Elisa Kreiss

Students

Tanmay Parekh

Mohsen Fayyaz

Ashima Suvarna

Yingjia Wan

Salman Rahman

Lucas Bandarkar