Kai-Wei Chang's Lab

UCLA NLP Seminar Series - Archive

Past talks from our weekly seminar series.

Past Talks

Auditing, Understanding, and Leveraging Large Language Models

Speaker: Robin Jia

Time: November 5, 2024, 4:15 PM

Location: 3400 Boelter Hall

Co-located with CS 201 Seminar

Speaker Bio: Robin Jia is an Assistant Professor of Computer Science at the University of Southern California. He received his Ph.D. in Computer Science from Stanford University, where he was advised by Percy Liang. He has also spent time as a visiting researcher at Facebook AI Research, working with Luke Zettlemoyer and Douwe Kiela. He is interested broadly in natural language processing and machine learning, with a focus on scientifically understanding NLP models in order to improve their reliability. Robin’s work has received best paper awards at ACL and EMNLP.

Abstract: The rise of large language models offers opportunities to both scientifically study these complex systems and apply them in novel ways. In this talk, I will describe my group’s recent work along these lines. First, I will discuss data watermarks, a statistically rigorous technique for auditing a language model’s training data based only on black-box model queries. Then, we will investigate how language models memorize training data: based on results from two complementary benchmarks, I will demonstrate the viability of localizing memorized data to a sparse subset of neurons. Next, I will provide a mechanistic account of how pre-trained language models use Fourier features to solve arithmetic problems, and how pre-training plays a critical role in these mechanisms. Finally, I will show how to leverage the complementary strengths of large language models and symbolic solvers to handle complex planning tasks.
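
To make the data-watermark idea concrete, here is a minimal sketch of the audit logic the abstract describes, not the speaker's implementation: if randomly generated watermark sequences were present in the training data, black-box queries should reveal systematically higher model scores on them than on matched control sequences, which a one-sided z-test can detect. All scores below are simulated stand-ins for real model queries.

```python
# Sketch of a data-watermark audit: compare a model's scores on watermark
# sequences against a null distribution from unseen control sequences.
# The scores here are simulated; a real audit would query the model under test.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Hypothetical black-box scores (e.g., average log-likelihood per token).
# A model trained on the watermarks should score them higher than controls.
watermark_scores = rng.normal(loc=-2.8, scale=0.4, size=200)   # simulated
control_scores = rng.normal(loc=-3.1, scale=0.4, size=2000)    # simulated

# One-sided z-test: is the mean watermark score significantly above the
# control mean, under the null hypothesis "never trained on the watermarks"?
mu, sigma = control_scores.mean(), control_scores.std(ddof=1)
z = (watermark_scores.mean() - mu) / (sigma / np.sqrt(len(watermark_scores)))
p_value = norm.sf(z)  # survival function: P(Z >= z)

print(f"z = {z:.2f}, p = {p_value:.3g}")
if p_value < 0.01:
    print("Audit flags the model: watermarks were likely in the training data.")
else:
    print("No statistical evidence of training on the watermarked data.")
```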

Building Accountable NLP Models for Social Good

Speaker: Jieyu Zhao

Time: November 1, 2024, 2:00 PM

Location: Room 289, Engineering VI

Speaker Bio: Jieyu Zhao is an assistant professor in the Computer Science Department at the University of Southern California, where she leads the LIME lab. Prior to that, she was an NSF Computing Innovation Fellow at the University of Maryland, College Park. Jieyu received her Ph.D. from the Computer Science Department at UCLA. Her research interest lies in the fairness of ML/NLP models. Her research has been covered by news media such as Wired and The Daily Mail. She was invited by UN Women Beijing to join a panel discussion about gender equality and social responsibility.

Abstract: The rapid advancement of large language models (LLMs) has unlocked a myriad of possibilities for positive societal impact, ranging from enhancing accessibility and communication to supporting disaster response and public health initiatives. However, the deployment of these technologies also raises critical concerns regarding accountability, fairness, transparency, and ethical use. In this talk, I will discuss our efforts for auditing NLP models, detecting and mitigating biases, and understanding how LLMs make decisions. We hope to open the conversation to foster a community-wide effort towards more accountable and inclusive NLP practices.
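
One simple flavor of the bias audits mentioned above can be sketched as a counterfactual probe: swap gendered words in otherwise identical sentences and measure the model's preference gap. The sketch below uses GPT-2 and a hypothetical WinoBias-style template pair purely for illustration; neither is claimed to be the speaker's setup.

```python
# Counterfactual bias probe (illustrative sketch): compare the language
# model's score for gender-swapped versions of the same sentence. A
# systematic gap across many template pairs is one simple audit signal.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def avg_log_likelihood(text: str) -> float:
    """Average per-token log-likelihood GPT-2 assigns to the text."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean negative log-likelihood
    return -loss.item()

# Hypothetical WinoBias-style template pair (not taken from the talk).
pro = "The doctor asked the nurse to help him with the procedure."
anti = "The doctor asked the nurse to help her with the procedure."

gap = avg_log_likelihood(pro) - avg_log_likelihood(anti)
print(f"pro- minus anti-stereotypical score gap: {gap:+.4f}")
# Aggregating this gap over many templates, in both swap directions,
# yields a crude but reproducible bias statistic.
```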

Translating images into words: From truthful to useful

Speaker: Elisa Kreiss

Time: October 25, 2024, 2:00 PM

Location: Maxwell Room 57-124, Engineering IV

Zoom link available

Speaker Bio: Elisa Kreiss is an Assistant Professor of Communication at UCLA and the lab director of the Coalas (Computation and Language for Society) Lab. Previously, she completed a PhD in Linguistics at Stanford, where she was a member of Stanford’s NLP group and the Stanford Data Science Center for Open and REproducible Science (CORES). Elisa investigates how we produce and understand language situated in the visual world. Her work combines tools from natural language processing, psycholinguistics, and human-computer interaction to advance our understanding of how communicative context shapes language use. Her research has direct applications to image accessibility – the challenge of (automatically) generating image descriptions for blind and low-vision users. Elisa’s work has been supported by several Google Research Awards, the National Science Foundation, Stanford’s Human-centered AI initiative, and Stanford’s Accelerator for Learning.

Abstract: Developing Vision-Language Models (VLMs) that can easily translate between the linguistic and visual modalities in human-like ways has many useful applications, including making visual content accessible to blind and low-vision individuals, detecting misinformation, and combating visual illiteracy. While the current generation of VLMs has quickly risen to human-level performance on many existing benchmarks, a remarkable gap remains between these scores and how useful the models prove to be in practice. In this talk, I will present recent and ongoing work suggesting that, in order to develop and understand the merit of Vision-Language Models for downstream applications, we need to define tasks and evaluation metrics that assess the communicative usefulness of the generated texts. Specifically, I will focus on the challenge of generating image descriptions and argue for moving the goalposts from what can be said about an image to the fundamentally pragmatic question of what should be said about it. Based on a variety of experiments with sighted participants and with blind and low-vision participants, I will show that the pragmatic notion of contextual relevance is a core pillar of generating human-like image descriptions, provide evidence that our current tasks and evaluation tools in NLP remain unhelpful in uncovering these context effects, and present work that starts to address this gap. Taken together, this work provides fundamental insights into how people communicate about the visual world and shows how we can use those insights to advance VLMs for social impact, such as non-visual accessibility.
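
The contrast between truthful and contextually useful descriptions can be illustrated with a toy relevance score. The sketch below uses plain TF-IDF overlap between a hypothetical article context and two candidate descriptions as a stand-in relevance measure; it is an assumption for illustration, not the metric proposed in the talk.

```python
# Toy "context-aware" description scoring: rank candidate image
# descriptions by lexical overlap with the text the image appears in.
# TF-IDF cosine similarity is a deliberately crude stand-in measure.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical article context and two truthful candidate descriptions.
context = ("The article discusses how urban parks support local bird "
           "populations during spring migration.")
candidates = [
    "A small bird perches on a branch in a leafy city park.",     # context-relevant
    "A photo taken outdoors on a sunny day with trees visible.",  # truthful but generic
]

vec = TfidfVectorizer().fit([context] + candidates)
ctx_vec = vec.transform([context])
for desc in candidates:
    score = cosine_similarity(ctx_vec, vec.transform([desc]))[0, 0]
    print(f"{score:.3f}  {desc}")
# Both descriptions are truthful, but only the first says what the context
# makes worth saying; that is the gap a purely image-based metric misses.
```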