I am CEO and Co-Founder of Mecha and a Computer Science PhD student at UC Berkeley, advised by Pieter Abbeel. My goal is to develop a generally intelligent agent capable of autonomous learning in the real world. I believe scaling the human physical data footprint is on the critical path to resolving Moravec’s Paradox. Follow me on X for updates on my work.
I received my BS (Honors) and MS in computer science with a specialization in artificial intelligence from Stanford University, conducting research advised by Fei-Fei Li as well as Kuan Fang and Animesh Garg. I interned at NVIDIA, conducting reinforcement learning, robotics, and simulation research mentored by Yuke Zhu and Jim Fan. Before that, I worked on machine learning and automation at Google and on digital transformation at McKinsey & Company.
Feel the Force: Contact-Driven Learning from Humans
Ademi Adeniji, Zhuoran Chen, Vincent Liu, Venkatesh Pattabiraman, and 4 more authors
@article{adeniji2025feelforcecontactdrivenlearning,
  title={Feel the Force: Contact-Driven Learning from Humans},
  author={Adeniji, Ademi and Chen, Zhuoran and Liu, Vincent and Pattabiraman, Venkatesh and Bhirangi, Raunaq and Haldar, Siddhant and Abbeel, Pieter and Pinto, Lerrel},
  year={2025},
  eprint={2506.01944},
  archiveprefix={arXiv},
  primaryclass={cs.RO},
  url={https://arxiv.org/abs/2506.01944},
}
EgoZero: Robot Learning from Smart Glasses
Vincent Liu*, Ademi Adeniji*, Haotian Zhan*, Raunaq Bhirangi, and 2 more authors
@article{liu2025egozerorobotlearningsmart,
  title={EgoZero: Robot Learning from Smart Glasses},
  author={Liu, Vincent and Adeniji, Ademi and Zhan, Haotian and Bhirangi, Raunaq and Abbeel, Pieter and Pinto, Lerrel},
  year={2025},
  eprint={2505.20290},
  archiveprefix={arXiv},
  primaryclass={cs.RO},
  url={https://arxiv.org/abs/2505.20290},
}
Language Reward Modulation for Pretraining Reinforcement Learning
Ademi Adeniji, Amber Xie, Carmelo Sferrazza, Younggyo Seo, and 2 more authors
In RLC Reinforcement Learning Beyond Rewards Workshop 2024
In RLC Training Agents with Foundation Models Workshop 2024
@inproceedings{adeniji2023languagerewardmodulationpretraining,
  title={Language Reward Modulation for Pretraining Reinforcement Learning},
  author={Adeniji, Ademi and Xie, Amber and Sferrazza, Carmelo and Seo, Younggyo and James, Stephen and Abbeel, Pieter},
  booktitle={RLC Reinforcement Learning Beyond Rewards Workshop 2024},
  note={Also presented at the RLC Training Agents with Foundation Models Workshop 2024},
  year={2024},
  eprint={2308.12270},
  archiveprefix={arXiv},
  primaryclass={cs.LG},
  url={https://arxiv.org/abs/2308.12270},
}
Video Prediction Models as Rewards for Reinforcement Learning
Alejandro Escontrela*, Ademi Adeniji*, Wilson Yan*, Ajay Jain, and 5 more authors
In NeurIPS 2023
@inproceedings{escontrela2023videopredictionmodelsrewards,
  title={Video Prediction Models as Rewards for Reinforcement Learning},
  author={Escontrela, Alejandro and Adeniji, Ademi and Yan, Wilson and Jain, Ajay and Peng, Xue Bin and Goldberg, Ken and Lee, Youngwoon and Hafner, Danijar and Abbeel, Pieter},
  booktitle={NeurIPS 2023},
  year={2023},
  eprint={2305.14343},
  archiveprefix={arXiv},
  primaryclass={cs.LG},
  url={https://arxiv.org/abs/2305.14343},
}
Skill-Based Reinforcement Learning with Intrinsic Reward Matching
Ademi Adeniji*, Amber Xie*, and Pieter Abbeel
In RLC Reinforcement Learning Beyond Rewards Workshop 2024 (Spotlight)
In RLC Training Agents with Foundation Models Workshop 2024
In NeurIPS Intrinsically Motivated Open-ended Learning Workshop 2023
Latent Actor-Critic with Intrinsic Motivation and Skill Hierarchy
Ademi Adeniji and Eva Zhang
@thesis{adeniji2020latentactorcritic,
  title={Latent Actor-Critic with Intrinsic Motivation and Skill Hierarchy},
  author={Adeniji, Ademi and Zhang, Eva},
  school={Stanford University},
  type={Course Project},
  year={2020},
  url={https://drive.google.com/file/d/1AnTCsq9rUZF-m9AFBeZEgFjGd4DVO33I/view?usp=sharing},
}
Latent Skill Transfer for Simulated Agents
Ademi Adeniji
@thesis{adeniji2019latentskilltransfer,
  title={Latent Skill Transfer for Simulated Agents},
  author={Adeniji, Ademi},
  school={Stanford University},
  type={Course Project},
  year={2019},
  url={https://drive.google.com/file/d/1LZmucvSjb3209sNswl-4ZS8mTdGDe_MO/view?usp=sharing},
}
Recurrent Control Nets for Deep Reinforcement Learning
Vincent Liu, Ademi Adeniji, Nate Lee, Jason Zhao, and 1 more author
In Stanford Undergraduate Research Journal 2019
@article{liu2019recurrentcontrolnetsdeep,
  title={Recurrent Control Nets for Deep Reinforcement Learning},
  author={Liu, Vincent and Adeniji, Ademi and Lee, Nate and Zhao, Jason and Srouji, Mario},
  journal={Stanford Undergraduate Research Journal},
  year={2019},
  eprint={1901.01994},
  archiveprefix={arXiv},
  primaryclass={cs.LG},
  url={https://arxiv.org/abs/1901.01994},
}
Volumetric Semantic Segmentation of Glioblastoma Tumors from MRI Studies
Ademi Adeniji and Vincent Liu
@thesis{adeniji2019volumetricsegmentationtumor,
  title={Volumetric Semantic Segmentation of Glioblastoma Tumors from MRI Studies},
  author={Adeniji, Ademi and Liu, Vincent},
  school={Stanford University},
  type={Course Project},
  year={2019},
  url={https://drive.google.com/file/d/11tmPV9PguRXKn-ZWsSea41p-Edo-U3DB/view},
}
Sequence-to-Sequence Generative Argumentative Dialogue Systems with Self-Attention
Ademi Adeniji, Nate Lee, and Vincent Liu
@thesis{adeniji2019sequence,
  title={Sequence-to-Sequence Generative Argumentative Dialogue Systems with Self-Attention},
  author={Adeniji, Ademi and Lee, Nate and Liu, Vincent},
  school={Stanford University},
  type={CS 224N Course Project},
  year={2019},
  url={https://web.stanford.edu/class/archive/cs/cs224n/cs224n.1194/reports/custom/15844523.pdf},
}
Training Agents using Cheap Data Sources September 2024
I gave an invited talk at the Princeton Reinforcement Learning Lab on how to leverage cheap, action-free data to improve the generalization of reinforcement learning policies. Here are the slides.
Accelerating Reinforcement Learning with Pretrained Behaviors 2022
I gave a series of talks to Preferred Networks, the Intel Reinforcement Learning Community, the Sony Deep Learning Group, and the EA Sports Reinforcement Learning Group on how to leverage pretrained skills to accelerate reinforcement learning. Here are the slides.
If any of my work sounds interesting, please drop me an email at ademi_adeniji@berkeley.edu!