arXiv:2205.15241 (cs)
[Submitted on 30 May 2022 (v1), last revised 15 Oct 2022 (this version, v2)]
Title: Multi-Game Decision Transformers
Authors: Kuang-Huei Lee, Ofir Nachum, Mengjiao Yang, Lisa Lee, Daniel Freeman, Winnie Xu, Sergio Guadarrama, Ian Fischer, Eric Jang, Henryk Michalewski, Igor Mordatch
Abstract: A longstanding goal of the field of AI is a method for learning a highly capable, generalist agent from diverse experience. In the subfields of vision and language, this was largely achieved by scaling up transformer-based models and training them on large, diverse datasets. Motivated by this progress, we investigate whether the same strategy can be used to produce generalist reinforcement learning agents. Specifically, we show that a single transformer-based model - with a single set of weights - trained purely offline can play a suite of up to 46 Atari games simultaneously at close-to-human performance. When trained and evaluated appropriately, we find that the same trends observed in language and vision hold, including scaling of performance with model size and rapid adaptation to new games via fine-tuning. We compare several approaches in this multi-game setting, such as online and offline RL methods and behavioral cloning, and find that our Multi-Game Decision Transformer models offer the best scalability and performance. We release the pre-trained models and code to encourage further research in this direction.
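The core idea behind Decision Transformer-style training referenced in the abstract is to treat offline RL as sequence modeling: logged trajectories are flattened into interleaved return-to-go, observation, and action tokens, and a transformer is trained to predict actions autoregressively. The sketch below is a minimal illustration of that trajectory tokenization, not the authors' released code; the function names, token layout, and toy data are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's implementation): flattening an
# offline trajectory into the (return-to-go, observation, action) token stream
# that a decision-transformer-style model is trained on.
import numpy as np


def returns_to_go(rewards, gamma=1.0):
    """Suffix sums of rewards: R_t = r_t + gamma * R_{t+1}."""
    rtg = np.zeros(len(rewards), dtype=np.float32)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        rtg[t] = running
    return rtg


def build_sequence(observations, actions, rewards):
    """Interleave per-timestep tokens as [R_t, o_t, a_t, ...].

    A transformer would then be trained to predict a_t given all tokens
    that precede it in this sequence.
    """
    rtg = returns_to_go(rewards)
    sequence = []
    for t in range(len(actions)):
        sequence.append(("return", float(rtg[t])))
        sequence.append(("obs", observations[t]))
        sequence.append(("action", actions[t]))
    return sequence


if __name__ == "__main__":
    # Toy trajectory standing in for logged Atari experience (84x84 frames).
    obs = [np.zeros((84, 84)), np.ones((84, 84)), np.full((84, 84), 2.0)]
    acts = [1, 0, 3]
    rews = [0.0, 1.0, 1.0]
    for kind, value in build_sequence(obs, acts, rews):
        print(kind, np.shape(value) if kind == "obs" else value)
```

At evaluation time, conditioning the same model on a high target return is what steers it toward expert-level behavior in each game.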
Comments: NeurIPS 2022. 24 pages, 16 figures. Additional information, videos and code can be seen at this https URL
Subjects: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
Cite as: arXiv:2205.15241 [cs.AI] (or arXiv:2205.15241v2 [cs.AI] for this version)
DOI: https://doi.org/10.48550/arXiv.2205.15241 (arXiv-issued DOI via DataCite)
Submission history
From: Kuang-Huei Lee
[v1] Mon, 30 May 2022 16:55:38 UTC (1,015 KB)
[v2] Sat, 15 Oct 2022 07:31:27 UTC (1,023 KB)