[2210.05178] Pre-Training for Robots: Offline RL Enables Learning New Tasks from a Handful of Trials
Computer Science > Robotics
arXiv:2210.05178 (cs)
[Submitted on 11 Oct 2022 (v1), last revised 23 Sep 2023 (this version, v3)]
Title:Pre-Training for Robots: Offline RL Enables Learning New Tasks from a Handful of Trials
Authors:Aviral Kumar, Anikait Singh, Frederik Ebert, Mitsuhiko Nakamoto, Yanlai Yang, Chelsea Finn, Sergey Levine
Abstract: Progress in deep learning highlights the tremendous potential of utilizing diverse robotic datasets for attaining effective generalization, and makes it enticing to consider leveraging broad datasets in robotic learning as well. However, in practice, we often want to learn a new skill in a new environment that is unlikely to be contained in the prior data. Therefore we ask: how can we leverage existing diverse offline datasets, in combination with small amounts of task-specific data, to solve new tasks while still enjoying the generalization benefits of training on large amounts of data? In this paper, we demonstrate that end-to-end offline RL can be an effective approach for doing this, without the need for any representation learning or vision-based pre-training. We present pre-training for robots (PTR), a framework based on offline RL that learns new tasks by combining pre-training on existing robotic datasets with rapid fine-tuning on a new task, using as few as 10 demonstrations. PTR builds on an existing offline RL method, conservative Q-learning (CQL), extending it with several crucial design decisions that enable PTR to actually work and outperform a variety of prior methods. To our knowledge, PTR is the first RL method that succeeds at learning new tasks in a new domain on a real WidowX robot with as few as 10 task demonstrations, by effectively leveraging an existing dataset of diverse multi-task robot data collected in a variety of toy kitchens. We also demonstrate that PTR can enable effective autonomous fine-tuning and improvement in a handful of trials, without needing any demonstrations. An accompanying overview video can be found in the supplementary material and at this https URL
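The abstract builds on conservative Q-learning (CQL), which augments the standard TD objective with a regularizer that pushes Q-values down on out-of-distribution actions and up on actions seen in the offline dataset. The sketch below is illustrative only and is not the paper's implementation: PTR operates on image observations with continuous actions, whereas this minimal version assumes a discrete action space, numpy arrays in place of a neural network, and a hypothetical `alpha` conservatism coefficient.

```python
import numpy as np

def cql_loss(q_values, actions, targets, alpha=1.0):
    """Minimal CQL-style loss for a batch of discrete-action transitions.

    q_values: (B, A) array of Q(s, a) for every action at each batch state
    actions:  (B,) indices of the actions actually taken in the dataset
    targets:  (B,) Bellman targets, e.g. r + gamma * max_a' Q_target(s', a')
    alpha:    conservatism coefficient (illustrative default)
    """
    batch = np.arange(len(actions))
    q_data = q_values[batch, actions]

    # Standard TD regression toward the Bellman targets on dataset actions.
    td_loss = np.mean((q_data - targets) ** 2)

    # Conservative regularizer: log-sum-exp over all actions minus the
    # dataset-action Q-value. This is always >= 0 and penalizes
    # overestimation of actions not present in the offline data.
    logsumexp = np.log(np.sum(np.exp(q_values), axis=1))
    conservative = np.mean(logsumexp - q_data)

    return td_loss + alpha * conservative
```

In a PTR-like recipe, one would first minimize this loss over the large multi-task dataset (pre-training), then continue minimizing it on the union of that data and the ~10 new-task demonstrations (fine-tuning), keeping the same objective throughout.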
| Subjects: | Robotics (cs.RO); Machine Learning (cs.LG) |
| Cite as: | arXiv:2210.05178 [cs.RO] |
| | (or arXiv:2210.05178v3 [cs.RO] for this version) |
| DOI: | https://doi.org/10.48550/arXiv.2210.05178 (arXiv-issued DOI via DataCite) |
Submission history
From: Aviral Kumar [view email][v1] Tue, 11 Oct 2022 06:30:53 UTC (38,808 KB)
[v2] Thu, 13 Apr 2023 17:57:17 UTC (29,408 KB)
[v3] Sat, 23 Sep 2023 23:25:32 UTC (29,408 KB)
Full-text links:
Access Paper:
- View PDF
- TeX Source
- Other Formats