For users who want to train Seer from scratch or fine-tune it, we provide comprehensive instructions for environment setup, downstream-task data preparation, training, and deployment.
This section details the pre-training process of Seer in real-world experiments, including environment setup, dataset preparation, and training procedures. Downstream-task processing and fine-tuning are covered in Real-World (Quick Training w & w/o pre-training).
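The pre-train-then-fine-tune recipe described above can be illustrated in miniature. The following toy sketch is not Seer's actual code: the model (a single-weight linear regressor), the synthetic tasks, and all hyperparameters are invented for illustration. It only shows the workflow shape — pre-train on one task, then briefly fine-tune on a related downstream task, which typically beats training from scratch under the same small fine-tuning budget.

```python
# Toy sketch of a pre-train / fine-tune workflow (illustrative only;
# this is NOT Seer's code -- model, data, and hyperparameters are invented).
import random

def make_data(slope, n=200, seed=0):
    """Synthetic (x, y) pairs for y = slope * x plus small Gaussian noise."""
    rng = random.Random(seed)
    xs = [rng.uniform(-1.0, 1.0) for _ in range(n)]
    return [(x, slope * x + rng.gauss(0.0, 0.01)) for x in xs]

def train(data, w0=0.0, lr=0.1, epochs=50):
    """Plain SGD on squared error for a single weight w."""
    w = w0
    for _ in range(epochs):
        for x, y in data:
            grad = 2.0 * (w * x - y) * x  # d/dw of (w*x - y)^2
            w -= lr * grad
    return w

def mse(data, w):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

# "Pre-training" task (slope 2.0) vs. "downstream" task (slope 2.5).
pretrain_data = make_data(slope=2.0, seed=0)
finetune_data = make_data(slope=2.5, seed=1)

w_pre = train(pretrain_data)  # pre-train to convergence
# Same short fine-tuning budget, different initializations:
w_ft = train(finetune_data, w0=w_pre, lr=0.005, epochs=1)
w_scratch = train(finetune_data, w0=0.0, lr=0.005, epochs=1)

print("fine-tuned MSE:", mse(finetune_data, w_ft))
print("from-scratch MSE:", mse(finetune_data, w_scratch))
```

Under this (contrived) budget, initializing from the pre-trained weight reaches a lower downstream error than starting from scratch, which is the motivation for reusing the pre-trained Seer checkpoint in downstream tasks.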
Release the evaluation code of Seer-Large on the CALVIN ABC-D experiment.
Release the training code of Seer-Large on the CALVIN ABC-D experiment.
Release the LIBERO-LONG experiment code.
Release simpleseer, a lightweight codebase for quick training from scratch and deployment.
License
All assets and code are under the Apache 2.0 license unless specified otherwise.
Citation
If you find the project helpful for your research, please consider citing our paper:
@article{tian2024predictive,
title={Predictive Inverse Dynamics Models are Scalable Learners for Robotic Manipulation},
author={Tian, Yang and Yang, Sizhe and Zeng, Jia and Wang, Ping and Lin, Dahua and Dong, Hao and Pang, Jiangmiao},
journal={arXiv preprint arXiv:2412.15109},
year={2024}
}
Acknowledgment
This project builds upon GR-1 and RoboFlamingo. We thank these teams for their open-source contributions.
About
[ICLR 2025 Oral] Seer: Predictive Inverse Dynamics Models are Scalable Learners for Robotic Manipulation