I am a researcher at Google DeepMind interested in large language model pre-training. I most recently led scaling laws work that became part of the Gemini 1.5 model family. I previously worked on Gemini 1 and PaLM 2, and trained some of the first preference models used in Project Bard (now known as Gemini). I am interested in all things related to creating useful AI systems.
I did my PhD in the EECS department at UC Berkeley, where I was fortunate to be advised by Prof. Sergey Levine. Before starting at Berkeley, I had the pleasure of working at Google as part of the inaugural Brain Residency Program (now the AI Residency).