I'm a fifth-year PhD student in the Stanford AI Lab (SAIL), where I'm advised by Dan Jurafsky and work on behavior-bound machine learning.

Machine learning is not a sterile industrial process: just as it is hardware-bound and software-bound, it is also behavior-bound, shaped by real-world actors such as workers, firms, and states. Borrowing from fields like economics, my work formalizes this behavior and creates algorithms, tools, and platforms that are compatible with actual actors, not just idealized ones.

Highlights:

  • SHP, the first large-scale public dataset of human preferences over text (5M examples in v2.0)
  • Archangel, the largest suite of human-feedback-aligned LLMs
  • Dynaboard, the evaluation-as-a-service platform behind Dynabench, used to host BabyLM, Flores, and other challenges
  • HALOs, a framework for creating prospect-theoretic losses for alignment

I have received an ICML 2022 Outstanding Paper award, a Facebook Fellowship, and an NSERC PGS-D during my PhD. Prior to Stanford, I was a National Scholar at the University of Toronto.

Recent Work (full list)

Principal-Agent Problems in Data Creation
Datasets are born of a conflict of incentives between those who pay for data (principals) and those who produce it (agents). As a result, they are much simpler than the real-world problems they purport to reflect. How can we shrink this gap? I work on frameworks for understanding dataset difficulty and use them to create datasets like SHP, the first large-scale public dataset of human preferences over text. SHP is one of the few open datasets used to align Llama-2, one of the most widely used LLMs.
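As a concrete example of how a preference dataset like SHP gets consumed, here is a minimal Python sketch that converts it into (prompt, chosen, rejected) triples for preference learning. The Hugging Face dataset ID and field names (history, human_ref_A, human_ref_B, labels) reflect SHP's released schema as I understand it, so treat them as assumptions rather than a guaranteed API.

    # Minimal sketch: turning SHP into (prompt, chosen, rejected) triples.
    # Assumes the dataset is published on the Hugging Face Hub as
    # "stanfordnlp/SHP" with history / human_ref_A / human_ref_B / labels
    # fields, where labels == 1 means comment A was preferred over B.
    from datasets import load_dataset

    shp = load_dataset("stanfordnlp/SHP", split="train")

    def to_preference_pair(example):
        a_preferred = example["labels"] == 1
        return {
            "prompt": example["history"],  # the original post or question
            "chosen": example["human_ref_A"] if a_preferred else example["human_ref_B"],
            "rejected": example["human_ref_B"] if a_preferred else example["human_ref_A"],
        }

    pairs = shp.map(to_preference_pair)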

Pluralistic Model Alignment
Alignment today is monolithic: one model, aligned with one method on one set of preferences, is served to all users. I draw connections between behavioral economics and model alignment to make alignment more pluralistic, discovering, for instance, that we can draw from a whole family of human-aware losses (HALOs) instead of just using DPO or PPO. One of these HALOs, called KTO, has become the most popular option for aligning LLMs with unpaired and imbalanced human feedback, the most common type in production settings.
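For the technically curious, here is a minimal PyTorch sketch of a KTO-style loss as I understand it from the paper: each output's implied reward (the policy-to-reference log-ratio) is compared against a KL-based reference point through a sigmoid value function, with separate weights for desirable and undesirable feedback. Function and variable names are illustrative, not the released HALOs API.

    import torch

    def kto_style_loss(policy_logps, ref_logps, desirable, kl_estimate,
                       beta=0.1, lambda_d=1.0, lambda_u=1.0):
        """Sketch of a KTO-style human-aware loss (names illustrative).

        policy_logps, ref_logps: per-example log p(y|x) under policy / reference
        desirable: bool tensor, True if the example got positive feedback
        kl_estimate: 0-dim tensor, batch estimate of KL(policy || reference),
            serving as the prospect-theoretic reference point
        """
        reward = beta * (policy_logps - ref_logps)   # implied reward: log-ratio
        z0 = beta * kl_estimate.clamp(min=0)         # reference point (non-negative)
        # Sigmoid value function: diminishing sensitivity to gains and losses
        value = torch.where(
            desirable,
            torch.sigmoid(reward - z0),              # gains relative to z0
            torch.sigmoid(z0 - reward),              # losses relative to z0
        )
        # Asymmetric weights model loss aversion over undesirable outputs
        weight = torch.where(desirable,
                             torch.tensor(lambda_d),
                             torch.tensor(lambda_u))
        return (weight * (1.0 - value)).mean()

Because each example only needs a binary desirable/undesirable signal, a loss of this shape can consume unpaired and imbalanced feedback that pairwise losses like DPO cannot.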

Cost-Sensitive Evaluation
The ML models that researchers consider the best are often not the ones deployed by firms in the real world. But why? The landscape in which models are deployed is heterogeneous, and firms are willing to sacrifice performance for memory efficiency, controllability, and more. I model these tradeoffs and design evaluation pipelines, such as Dynaboard, that better simulate real-world considerations. Dynaboard has been used to host DADC (Dynamic Adversarial Data Collection), DataPerf, BabyLM, Flores, and many other challenges.
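To make this concrete, below is a toy sketch of a cost-sensitive score in the spirit of Dynaboard's Dynascore: every metric is converted into "performance units" at an exchange rate and aggregated with scenario-specific weights, so the same model can rank differently for different deployers. All metric names, rates, and weights here are invented for illustration.

    # Toy cost-sensitive score: convert each metric into performance units
    # at an exchange rate, then take a weighted sum. Everything below is
    # invented for illustration, not Dynaboard's actual Dynascore.
    def cost_sensitive_score(metrics, exchange_rates, weights):
        """metrics: raw values per metric
        exchange_rates: performance points per unit of each metric
        weights: how much a deployment scenario cares about each metric
        """
        return sum(
            weights[m] * exchange_rates[m] * metrics[m]
            for m in metrics
        )

    metrics = {"accuracy": 0.82, "throughput": 45.0, "memory_gb": 12.0}
    rates = {"accuracy": 1.0, "throughput": 0.002, "memory_gb": -0.005}  # memory is a cost

    # Two firms with different priorities rank the same model differently:
    latency_sensitive = {"accuracy": 0.5, "throughput": 0.4, "memory_gb": 0.1}
    accuracy_first = {"accuracy": 0.9, "throughput": 0.05, "memory_gb": 0.05}

    print(cost_sensitive_score(metrics, rates, latency_sensitive))
    print(cost_sensitive_score(metrics, rates, accuracy_first))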