Kai-Wei Chang's Lab

UCLA NLP Seminar Series

Welcome to our weekly seminar series.

Date        | Speaker       | Title
October 3   | Brihi Joshi   | Towards Richer User Signals for Personalization
October 10  | Jacob Andreas | Just Asking Questions
October 17  | Aviral Kumar  | TBD
October 31  | Rose Yu       | TBD
November 14 | Arman Cohan   | TBD
November 21 | Sherry Yang   | TBD

🚀 Upcoming Talks


Towards Richer User Signals for Personalization

Speaker: Brihi Joshi

Time: October 3, 2025, 2:00 PM

Location: Room 289, Engineering VI

Speaker Bio: Brihi Joshi is a final-year PhD student in Computer Science at the University of Southern California, advised by Xiang Ren and Swabha Swayamdipta. Her research focuses on human-AI interaction, with an emphasis on personalization, where she designs and evaluates interactive systems that adapt to users in meaningful and useful ways. Her work has been supported by fellowships from Apple and Amazon.

Abstract: Personalization is gaining attention across domains, with prior work exploring signals ranging from user demographics to interaction history. The talk will begin by showing that common signals such as prompts and instructions are underspecified for truly useful personalization, leading only to surface-level changes; for example, failing to adapt to learners with different educational backgrounds. We will then present how LLMs can be used to synthesize richer signals, such as user explanations, that drive more meaningful personalization. Finally, we will share ongoing work on training systems to actively elicit useful user signals, and touch upon open problems in how we can obtain and use these signals.


Just Asking Questions

Speaker: Jacob Andreas

Time: October 10, 2025, 2:00 PM

Zoom: To Be Announced

Speaker Bio: Jacob Andreas is an associate professor at MIT in the Department of Electrical Engineering and Computer Science as well as the Computer Science and Artificial Intelligence Laboratory. His research aims to understand the computational foundations of language learning, and to build intelligent systems that can learn from human guidance. Jacob earned his Ph.D. from UC Berkeley, his M.Phil. from Cambridge (where he studied as a Churchill Scholar), and his B.S. from Columbia. He has received a Sloan fellowship, an NSF CAREER award, MIT's Junior Bose and Kolokotrones teaching awards, and paper awards at ACL, ICML, and NAACL.

Abstract: In the age of deep networks, "learning" almost invariably means "learning from examples". We train language models with human-generated text and labeled preference pairs, image classifiers with large datasets of images, and robot policies with rollouts or demonstrations. When human learners acquire new concepts and skills, we often do so with richer supervision, especially in the form of language: we learn new concepts from examples accompanied by descriptions or definitions, and new skills from demonstrations accompanied by instructions. Current language models (LMs) support a limited form of language-based teaching via prompting, but it remains challenging to use natural language supervision to apply global, persistent changes to learned models. This talk will focus on two recent projects aimed at more effectively supervising LMs using language: first, on *eliciting* new information (by asking questions to human users of LMs); second, on *updating* language models to incorporate new information (by using LMs to automatically ask and answer questions about information implied by, but not explicitly stated in, training data). If time permits, I'll also discuss some applications of these techniques to educational settings (where we can optimize questions for human, rather than machine, learning). This is joint work with Belinda Li, Alex Tamkin, Noah Goodman, Feyza Akyürek, Ekin Akyürek, Leshem Choshen, Derry Wijaya, and Alexis Ross.

Organizing Committee

Faculty

Prof. Kai-Wei Chang

Prof. Nanyun Peng

Prof. Saadia Gabriel

Prof. Elisa Kreiss

Students

Tanmay Parekh

Mohsen Fayyaz

Ashima Suvarna

Salman Rahman