I am a Research Lead at Meta TBD Labs. Before that, I was a research lead at OpenAI and a core contributor to o1, o3, GPT-4.1, GPT-4.5, and GPT-5. I did my PhD at Stanford University, advised by Percy Liang and Tengyu Ma. Here is my CV.
Model Releases
o3 (livestream and blog post)
o1 (core contributor to the RL algorithms)
GPT-4.1 (research co-lead)
GPT-4.5 (core contributor to pretraining evals)
Selected Publications
(See all publications at this link)
Learning to Reason with LLMs. OpenAI 2024. (core contributor to the RL algorithms)
Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution. Ananya Kumar, Aditi Raghunathan, Robbie Jones, Tengyu Ma, Percy Liang. International Conference on Learning Representations (ICLR Oral) 2022. 1.6% oral acceptance rate. [Slides]
Connect, Not Collapse: Explaining Contrastive Learning for Unsupervised Domain Adaptation. Kendrick Shen*, Robbie Jones*, Ananya Kumar*, Sang Michael Xie*, Jeff Z. HaoChen, Tengyu Ma, Percy Liang. International Conference on Machine Learning (ICML Long Talk) 2022. 2.1% long talk acceptance rate. [Slides]
Understanding Self-Training for Gradual Domain Adaptation. Ananya Kumar, Tengyu Ma, Percy Liang. International Conference on Machine Learning (ICML) 2020.
Verified Uncertainty Calibration. Ananya Kumar, Percy Liang, Tengyu Ma. Neural Information Processing Systems (NeurIPS Spotlight) 2019. 3.0% spotlight / oral acceptance rate.
Students Advised
I have been lucky to co-advise a number of talented undergraduate and master’s students at Stanford, who have written some very insightful papers:
- Fahim Tajwar: ICML Workshop 2021, ICLR 2023, Preprint 2023
  - Next: PhD Student at CMU
- Kendrick Shen: ICLR 2022, ICML 2022
  - Next: ML Research Engineer at Genesis Therapeutics
- Michael Sun: Intern project on continual learning
  - Next: PhD Student at MIT
- Robbie Jones: ICLR 2021, ICLR 2022, ICML 2022
  - Next: ML Software Engineer at GridSpace
- Vaish Srivastava: Ongoing projects on uncertainty quantification
I have also mentored or proposed research directions for a number of fantastic PhD students who have taught me a lot:
- Sachin Goyal (CMU): CVPR 2023 (Robust fine-tuning)
- Jeff Z. HaoChen: NeurIPS 2022 (Pretraining for robustness)
- Nelson Liu: ACL 2023 (Robustness of NLP models)
- Alex Li (CMU): ICLR 2025 (Robust machine learning)