Overview
Recent advances in robot capabilities, fueled by data-hungry learning algorithms and large-scale foundation models, are undeniably exciting. However, as we marvel at these advancements, a critical question arises: is the current trajectory of "scaling data and compute is all you need" sustainable or even desirable for robotics? Myopically committing to this path risks creating systems that are uneconomical and ill-suited for the various constraints imposed by physical embodiments, thereby limiting widespread adoption. Instead, just as evolution nurtured the emergence of efficient brains, and just as mature engineering disciplines meticulously navigate resource-performance trade-offs, it is imperative that robot learning as a field embed resource considerations at every stage of the robot learning deployment lifecycle -- from training and inference to continuous improvement and adaptation. This workshop will explore why designing for resource efficiency is beneficial, which resources should be considered for learning, and how they can be leveraged pragmatically to realize economical robot deployment.
Towards this end, we seek to advance robot learning towards resource rationality: careful deliberation on how the robot should judiciously use resources (such as computation, time, and human assistance) based on their cost-performance tradeoff through all phases of learning. Through this lens, the benefits of resource-rational learning can be explored across a diversity of perspectives, such as the importance of models for principled, sample-efficient learning and inference; effectiveness of simulated data to complement costly real-world samples; leveraging human instructions, feedback, and priors to ground what is important to learn; understanding how much sensing and information is needed for a task; and the implications of resource rationality for today's foundation models.
Resource rationality has broad impact not only for many problems within robot learning, but also for adjacent fields related to embodied intelligence, such as planning, cognitive science (understanding how human learning is achieved given biological computational limitations), causality (understanding the causal relevance of data to decide how training resources should be expended), and systems design and engineering (how to build systems at the Pareto frontier of performance and resource utilization). Our intended audience therefore includes the robot learning, planning, machine learning, cognitive science, causality, and robotic systems design communities, and we will select speakers and panelists with expertise in these areas.
Discussion and Structure
Our workshop aims to highlight the diversity of perspectives that compose resource-rational robot learning across the robot learning deployment lifecycle -- training, inference, and continuous improvement and adaptation -- through discussion-rich research questions. These research questions will be embedded throughout the workshop structure, from keynote and contributed talks to our closing debate and reflections on resource rationality.
- Training: What is the role of simulation and sim-to-real transfer to help reduce the need for expensive real-world interaction?
- Training: How can a robot efficiently learn from diverse forms of human input, such as instructions, demonstrations, feedback, and interventions?
- Training: Should data gathering remain a passive process directed solely by humans, or can robots actively shape their learning trajectory?
- Training: How can we further improve the efficiency and utility of model-based learning?
- Inference: How can we effectively characterize and accommodate physical constraints on the deployed policy during training in current robot learning pipelines?
- Inference: Just as engineers meticulously plan construction sites or space missions, can we establish which robot (hardware + software stack) achieves various task objectives with minimal resources without requiring expensive trial-and-error iterations?
- Inference: Can we go beyond the sequential-decision making framing and fully leverage modern asynchronous and parallel computation capabilities for efficient inference?
- Continuous Improvement: What algorithmic advances should we make to adapt a robot's behavior in a sample-efficient manner? Can the process of fine-tuning/adaptation be made significantly cheaper than continued training on large swaths of data?
- Continuous Improvement: How can we minimize the cognitive load on humans who help a robot improve?
- Continuous Improvement: When and what aspects of its internal state should a robot communicate to humans in order to maximize learning new concepts and tasks efficiently?
Workshop Program
Our workshop will feature a diversity of programming elements to explore resource rationality in robot learning. Specifically, it includes keynote talks, contributed talks, a poster session, all-day audience engagement through an in-workshop guided survey, a debate to critically analyze whether resource rationality should be a priority for the robot learning community, and culminating reflections on resource rationality.
Confirmed Speakers and Panelists
Schedule
| Session 1: Imitating and Learning from Humans | |
| 9:25: | Introduction |
| 9:30: | Yuke Zhu: "Data-Efficient Imitation Learning" |
| 10:00: | Erdem Bıyık: "Maximally Informative, Minimally Demanding: Learning from Human Feedback" |
| 10:30: | Coffee break |
| Session 2: Human-Centered Rationality | |
| 11:00: | Karinne Ramirez-Amaro: "Combining Interpretable and Explainable Methods in Robot Decision-Making" |
| 11:30: | Yukie Nagai: "From Child Development to Resource-Rational Robot Learning" |
| 12:00: | Tianmin Shu: "Scaling Model-based Mental Reasoning for Proactive and Efficient Human-Robot Collaboration" |
| 12:30: | Lunch |
| Session 3: Large Models and Contributed Papers | |
| 13:30: | Benjamin Burchfiel: "Rationally-Large Behavior Models" |
| 14:00: | Best paper talks |
| 14:15: | Lightning talks |
| 14:30: | Poster session |
| 15:00: | Coffee break |
| Session 4: Reflections on Resource Rationality | |
| 15:30: | Marc Toussaint: "On Resource-Rational Reasoning and Diversity" |
| 16:00: | Panel discussion and debate |
| 16:45: | Concluding remarks |
Accepted Papers
- Mini Diffuser: Resource-Rational Multi-task Diffusion Policy Training (best paper)
  Yutong Hu, Kehan Wen, Pinhao Song, Renaud Detry
- RAMBO: RL-Augmented Model-Based Whole-Body Control for Loco-Manipulation (best paper)
  Jin Cheng, Dongho Kang, Gabriele Fadini, Guanya Shi, Stelian Coros
- Touch begins where vision ends: Generalizable policies for contact-rich manipulation (best paper)
  Zifan Zhao, Siddhant Haldar, Jinda Cui, Lerrel Pinto, Raunaq Bhirangi
- CAIMAN: Causal Action Influence Detection for Sample-efficient Loco-manipulation (best paper runner-up)
  Yuanchen Yuan, Jin Cheng, Núria Armengol Urpí, Stelian Coros
- Action Reasoning Models that can Reason in Space (best paper runner-up)
  Jason Lee, Jiafei Duan, Haoquan Fang, Yuquan Deng, Boyang Li, Shou Liu, Bohan Fang, Jieyu Zhang, Yi Ru Wang, Sangho Lee, Winson Han, Wilbert Pumacay, Angelica Wu, Rose Hendrix, Karen Farley, Eli VanderBilt, Ali Farhadi, Dieter Fox, Ranjay Krishna
- Model Predictive Adversarial Imitation Learning for Planning from Observation (best paper runner-up)
  Tyler Han, Yanda Bao, Bhaumik Mehta, Gabriel Guo, Anubhav Vishwakarma, Emily Kang, Sanghun Jung, Rosario Scalise, Jason Liren Zhou, Bryan Xu, Byron Boots
- Adaptive Diffusion Constrained Sampling for Bimanual Robot Manipulation
  Haolei Tong, Yuezhe Zhang, Sophie Lueth, Georgia Chalvatzaki
- AMPED: Adaptive Multi-objective Projection for balancing Exploration and skill Diversification
  Jaegyun Im, Geonwoo Cho, Jaemoon Lee, Sundong Kim
- Learning More With Less: Sample-Efficient Model-Based RL for Loco-Manipulation
  Benjamin Hoffman, Jin Cheng, Chenhao Li, Stelian Coros
- TRACED: Transition-aware Regret Approximation with Co-learnability for Environment Design
  Geonwoo Cho, Hojun Yi, Sundong Kim
- Group Policy Gradient
  Junhua Chen, Zixi Zhang, Hantao Zhong, Rika Antonova
- From Simulation to Reality: Data-Efficient Evaluation of Causal Bayesian Networks
  Zhitao Liang, Maximilian Diehl, Nanami Hashimoto, Anne Köpken, Daniel Leidner, Karinne Ramirez-Amaro, Emmanuel Dean
- SPARQ: Selective Progress-Aware Resource Querying
  Anujith Muraleedharan, Anamika J H
- NeRF-Aug: Data Augmentation for Robotics with Neural Radiance Fields
  Eric Zhu, Mara Levy, Matthew Gwilliam, Abhinav Shrivastava
- Learning on the Fly: Rapid Policy Adaptation via Differentiable Simulation
  Jiahe Pan, Jiaxu Xing, Rudolf Reiter, Yifan Zhai, Elie Aljalbout, Davide Scaramuzza
- RoboSSM: Scalable In-context Imitation Learning via State-Space Models
  Youngju Yoo, Jiaheng Hu, Yifeng Zhu, Bo Liu, Qiang Liu, Peter Stone
Call For Proposals
We invite contributions that tackle resource rationality for robot learning from diverse perspectives, including but not limited to the following topics. We particularly encourage early-stage ideas, preliminary results, or in-progress work that can spark discussion and inspire new directions. Unfortunately, we are not able to accept submissions that have already been accepted to the main CoRL 2025 conference.
- Cognitive Architecture
- Human-Robot Interaction (HRI)
- Intuitive Psychology (Social Learning and Theory of Mind)
- Learning and Planning for Tasks
- Learning and Planning for Control
- Reinforcement Learning
- Sim2Real, Generative AI
- Active Learning
- Intuitive Physics
- Multi-Agent Collaboration
- Lifelong Learning
- Meta-Reasoning, Meta-Cognition, and Meta-Structure
- Causality, Causal Reasoning, and Causal Learning
- Efficient Adaptation of Large Models
- Efficient Inference of Large Models
- Program-Guided Learning
- Active Perception
- Multi-Objective Optimization
- Robotic Systems Design
- Co-Design of Hardware and Controllers
- Human-in-the-Loop Planning and Execution
Submission format
Papers should be submitted through OpenReview. Papers may be up to 8 pages and should be formatted using the CoRL 2025 LaTeX template. Acknowledgments, References, and the optional Appendix do not count towards the page limit, and submissions must be anonymized. Authors are encouraged to submit a supplementary file containing further details for reviewers, uploaded through OpenReview as a single zip file.
Reviewing process
The reviewing process will be double-blind, single-phase (i.e., no rebuttal).
Publication
Papers accepted to the workshop will be non-archival — there will be no formal proceedings. At least one author of each accepted paper must attend the workshop in person.
Important Dates
Submission Deadline: Aug 18, 2025, AoE (extended from Aug 15, 2025)
Author Notification: Sept 5, 2025, AoE
Camera Ready Deadline: Sept 22, 2025, AoE
Workshop Date: Sept 27, 2025
Reviewers
We'd like to thank all the reviewers for their time and effort in helping us with the review process!
- Chen Li
- Edward S. Hu
- Patrick Callaghan
- Rishav Rishav
- Yorai Shaoul
- Tianyu Li
- Arijit Dasgupta
- Kin Man Lee
- Zifan Xu
- Jiaxun Cui
- Itamar Mishani
- Zulfiqar Zaidi
- Dai-Jie Wu
- Varshith Sreeramdass
- Vincent Pacelli
- Viraj Parimi
- Anurag Maurya
- Shuo Cheng
- Jiaheng Hu
- Ricardo Cannizzaro
- Sudarshan Sunil Harithas