RemembeRL: what can past experience tell us about our current action?
CoRL 2025
Seoul, South Korea
September 27th, 2025
End: 4:30 PM
Robot learning has traditionally relied on machine learning methods under the induction paradigm, in which the agent distills a policy into a condensed, unified model from a set of training data.
Under this formulation, the training data is discarded at test time, and the agent can no longer explicitly reason over past experience to inform its current action. In other words, the agent is not allowed to remember previous situations.
However, humans readily recall past experiences to solve decision-making problems, and we notice a close connection with recent trends toward in-context learning, meta-learning, and retrieval-augmented inference in the Reinforcement Learning (RL) and Imitation Learning (IL) literature.
In this workshop, we investigate how robot learning algorithms can be designed so that robotic agents explicitly retrieve, reason over, attend to, or otherwise leverage past experience to quickly adapt to previously unseen situations or to bootstrap their learning.
Recent advances in the field allow agents to remember by framing the problem as in-context learning or meta-learning, e.g. by incorporating memory in the form of explicit, contextual input to a policy.
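As a minimal sketch of what "memory as explicit, contextual input to a policy" can look like: the policy below receives, alongside the current observation, a context of past observation-action pairs supplied at test time. The function name, the mean-pooled context embedding, and the toy linear policy are all illustrative assumptions, not the method of any specific paper (real systems typically attend over the context with a transformer).

```python
import numpy as np

def in_context_policy(obs, context, W):
    """Toy policy conditioned on an explicit context of past transitions.

    obs:     current observation, shape (obs_dim,)
    context: list of (past_obs, past_action) pairs supplied at test time
    W:       policy weights, shape (act_dim, obs_dim + obs_dim + act_dim)
    """
    # Summarize the context with a simple mean embedding of past
    # observation-action pairs (a transformer would attend over them instead).
    pairs = np.stack([np.concatenate([o, a]) for o, a in context])
    ctx_emb = pairs.mean(axis=0)
    # The context enters the policy as an explicit extra input:
    return W @ np.concatenate([obs, ctx_emb])

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 8))          # obs_dim=3, act_dim=2
obs = np.zeros(3)
ctx_a = [(np.ones(3), np.zeros(2))]
ctx_b = [(2.0 * np.ones(3), np.zeros(2))]
# Same observation, different remembered context -> different action.
action_a = in_context_policy(obs, ctx_a, W)
action_b = in_context_policy(obs, ctx_b, W)
```

The point of the sketch is only that the agent's behavior changes with the context it is given, without any weight update.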
Similarly, recent works propose retrieval mechanisms (retrieval-augmented policies) that let agents condition the current action on relevant previous experience.
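A retrieval-augmented policy, in its simplest form, keeps raw experience around and consults it at decision time. The class below is a hedged toy illustration (1-nearest-neighbor action copying over a flat buffer); its names are invented for this sketch, and practical systems instead retrieve learned embeddings that feed into a trained policy.

```python
import numpy as np

class RetrievalAugmentedPolicy:
    """Toy retrieval-augmented policy: store raw experience and, at test
    time, act based on the most similar past situations (k-NN here)."""

    def __init__(self):
        self.obs_buf, self.act_buf = [], []

    def remember(self, obs, action):
        # Experience is kept explicitly rather than compressed into weights.
        self.obs_buf.append(np.asarray(obs, dtype=float))
        self.act_buf.append(np.asarray(action, dtype=float))

    def act(self, obs, k=1):
        # Retrieve the k nearest past observations and reuse their actions.
        obs = np.asarray(obs, dtype=float)
        dists = np.linalg.norm(np.stack(self.obs_buf) - obs, axis=1)
        nearest = np.argsort(dists)[:k]
        return np.mean([self.act_buf[i] for i in nearest], axis=0)

policy = RetrievalAugmentedPolicy()
policy.remember([0.0, 0.0], [1.0])
policy.remember([10.0, 10.0], [-1.0])
```

Here a query near a remembered observation recovers the action taken there, which is exactly the "remember how to act" behavior the workshop question contrasts with "know how to act".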
We then raise the question:
"Should robots know, or remember how to act?"
Our objective is to understand the interconnection between the fields of in-context learning, meta-learning, transductive and retrieval-augmented inference for robot learning applications.
In this context, we aim to discuss recent contributions, challenges, opportunities, and novel directions for future research.
Open questions for the community
- Should robotic agents know or remember how to act?
- How can past experience be integrated implicitly or explicitly in RL and IL algorithms?
- Does reasoning on past experience improve agent performance even for tasks that do not require memory?
- Can agents generalize their memory over unseen data and tasks? (e.g., in-context learning with out-of-distribution contexts)
- What are the interconnections between in-context learning, meta-learning, and retrieval-based approaches?
News
Recording
Video recordings will be made available in the Program section below, alongside each talk.
Invited Speakers & Panelists
Gunhee Kim
Abhishek Gupta
Steven Morad
Chelsea Finn
Hung Le
Sergey Levine
Dhruv Shah
Animesh Garg
Program
Morning
| 9:30 - 9:40 AM | Welcome | Opening remarks |
| 9:40 - 10:10 AM | Invited Talk 1 | Gunhee Kim, Seoul National University |
| 10:10 - 10:40 AM | Invited Talk 2 | Abhishek Gupta, University of Washington |
| 10:40 - 11:30 AM | Break | Coffee break + Poster Session (Morning) |
| 11:30 - 12:00 PM | Invited Talk 3 | Steven Morad, University of Macau |
| 12:00 - 12:30 PM | Invited Talk 4 | Chelsea Finn, Stanford University |
| 12:30 - 1:30 PM | Break | Lunch |
Afternoon
| 1:30 - 2:00 PM | Invited Talk 5 | Hung Le, Deakin University |
| 2:00 - 2:05 PM | Startup Pitch | Paperedge.ai |
| 2:05 - 2:30 PM | Poster Spotlights | Spotlight talks 1, 2, 3 |
| 2:30 - 3:15 PM | Break | Coffee break + Poster Session (Afternoon) |
| 3:15 - 3:45 PM | Poster Spotlights | Spotlight talks 4, 5, 6 |
| 3:45 - 4:15 PM | Panel Discussion | Sergey Levine, Dhruv Shah, Steven Morad, Animesh Garg |
| 4:20 - 4:30 PM | Closing | Closing remarks & awards! 🏆 |
Organizers
Advisory Board
Call for Papers
We invite contributions that explore memory-based, in-context, retrieval-augmented, or transductive approaches to robot learning from diverse perspectives, including but not limited to the following topics. We particularly encourage early-stage ideas and preliminary results that can spark discussion and inspire new directions in the field.
Note: at CoRL 2025, only papers that are NOT accepted at the main conference can be accepted at the workshop.
Submission Format
- Papers must be submitted through OpenReview (Link).
- Papers are expected to be 4-8 pages (excluding references, acknowledgments, and optional appendix).
- They should be formatted using the CoRL 2025 LaTeX template (Link).
- Submissions must be anonymized.
- Authors are encouraged to provide a supplementary file with further details for reviewers, uploaded through OpenReview as a single zip file.
- Only papers that are NOT already accepted at the main conference can be submitted to the workshop (strict policy @ CoRL 2025).
Reviewing Process
- The reviewing process will be double-blind, single-phase (i.e., no rebuttal).
Publication
- Accepted papers will be non-archival.
- There will be no formal proceedings. Accepted papers will be published on the workshop website for future visitors unless otherwise requested by authors.
- At least one author for each accepted paper must attend the workshop in-person.
