Julian Michael

I am a postdoc at the NYU Center for Data Science working with Sam Bowman. I earned my PhD from the University of Washington, where my advisor was Luke Zettlemoyer. I work on a variety of things, mostly focused on AI alignment and the formal semantics of natural language.

In alignment, I focus on scalable oversight and agent alignment, through the lens of task formulation, data collection, and evaluation methodology. I’m especially interested in using debate as a training and evaluation paradigm, which we recently tested with humans. I’m also interested in pushing the boundaries of difficult evaluations for scalable oversight, as in our recent release of GPQA.

In language, I work on ways to use data and machine learning to help us build a better science of language, particularly when it comes to syntax and semantics. I lay out this scientific paradigm in my thesis, described best in my talk at the Big Picture Workshop. Toward building its foundations, I have developed approaches to crowdsourcing annotations for syntactic parsing, semantic role labeling, and predicate-argument structure.

More broadly, I am interested in the science of AI and NLP: using empirical methods to improve our understanding of intelligent behavior and language use. Along these lines, I have worked on broad-coverage and fine-grained evaluation of models, unsupervised discovery of linguistic structure, and explicitly incorporating ambiguity into task design. See my publications for a full list.

Selected publications (see all)

Selected talks (see all)

The Case for Scalable, Data-Driven Theory: A Paradigm for Scientific Progress in NLP
This is the best entry point to my thesis work, laying out my proposal for how to use machine learning to do better science, specifically in the case of syntax and semantics.
video slides paper
Delivered Dec 07, 2023 as the best paper talk at The Big Picture Workshop.
An Introduction to NLP, for Scientists
A summary of the contemporary state of NLP and a proposal (novel at the time, as far as I'm aware) for how to use language models for scientific data analysis. Includes spicy takes on the relationship between deep learning and psychiatric drugs, and more.
video slides colab
Delivered Apr 05, 2023 as a guest lecture for Madalina Vlasceanu's graduate regression class at NYU.
Philosophical Foundations of AI Ethics
An introduction to foundational issues in AI ethics from a philosophical perspective. I try to connect contemporary views and disagreements to their philosophical roots in consequentialism, deontology, social contract theory, and critical theory.
video slides
Delivered Jan 25, 2022 for Yulia Tsvetkov's UW class on Computational Ethics in NLP.
Representing Meaning with Question-Answer Pairs
A 10-minute, accessible colloquium talk summarizing some of my early PhD work on crowdsourcing representations of language structure and meaning. My experience with the projects described in this talk led me to invest further in QA-SRL.
video slides
Delivered Nov 07, 2019 at the UW Allen School Colloquium.

Other writings
