Will Liang
willjhliang@gmail.com (GitHub, LinkedIn, Google Scholar, Twitter, notes)
I am a PhD student at the University of California, Berkeley, advised by Pieter Abbeel and Jitendra Malik, and supported by the NSF Graduate Research Fellowship. My research interests include robot learning, generative models, and multimodal representations for embodied agents that learn from human priors and unstructured, self-directed experience.
Previously, I studied at the University of Pennsylvania, working with Jason Ma, Dinesh Jayaraman, Osbert Bastani, Kostas Daniilidis, and Jianbo Shi.
In the evenings, you might also find me playing guitar or painting.
Updates
- Feb 2025: Joined NVIDIA as a research intern, working with Joel Jang, Jim Fan, and Yuke Zhu.
- Nov 2024: Trained parkour skills via an adaptive and automatic environment curriculum—Eurekaverse.
- May 2024: Globe-walked our dog around campus, trained with LLM-powered sim-to-real—DrEureka.
- Oct 2023: Taught a robot hand some pen spinning tricks with evolutionary reward design—Eureka.
- Aug 2023: Met awesome people at CMMRS, hosted by MPI-SWS, Cornell, and University of Maryland.
Research
Selected publications are listed below; the complete list is on Google Scholar.
Articulate Anything: Automatic Modeling of Articulated Objects via a Vision-Language Foundation Model
Long Le, Jason Xie, William Liang, Hung-Ju Wang, Yue Yang, Yecheng Jason Ma, Kyle Vedder, Arjun Krishna, Dinesh Jayaraman, Eric Eaton
International Conference on Learning Representations (ICLR), 2025
Projects
Below are a few other projects I particularly enjoyed; more can be found on GitHub.