@article{Ko2023Learning,
title={{Learning to Act from Actionless Videos through Dense Correspondences}},
author={Ko, Po-Chen and Mao, Jiayuan and Du, Yilun and Sun, Shao-Hua and Tenenbaum, Joshua B},
journal={arXiv:2310.08576},
year={2023},
}
Getting started
We recommend creating a new conda environment with PyTorch installed.
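A minimal setup along those lines might look as follows; the environment name `avdc` and the Python version are illustrative choices, not prescribed by the repository:

```shell
# create and activate a fresh environment (name and version are examples)
conda create -n avdc python=3.9
conda activate avdc
# install PyTorch into the environment
pip install torch torchvision
```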
To run the full AVDC on Meta-World, run the following command:
# make sure you have the checkpoint ../ckpts/metaworld/model-24.pt
bash benchmark_mw.sh 0
# the argument 0 is the GPU id; change it to another GPU id if you wish
We have also provided another checkpoint trained with simple random-shift data augmentation. Specifically, we first center-cropped the image to 160x160 from the original 320x240 image and then randomly cropped a 128x128 image from it. We found slightly improved performance with this simple augmentation.
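The augmentation described above can be sketched as follows; this is an illustrative NumPy implementation of the two crop steps (center crop to 160x160, then a random 128x128 crop), with function names of our own choosing rather than code from the repository:

```python
import numpy as np

def center_crop(img, size):
    """Crop a (H, W, C) image to a centered size x size patch."""
    h, w = img.shape[:2]
    top = (h - size) // 2
    left = (w - size) // 2
    return img[top:top + size, left:left + size]

def random_shift_augment(img, crop=160, out=128, rng=None):
    """Center-crop to crop x crop, then take a random out x out crop.

    Hypothetical sketch of the random-shift augmentation described
    above, applied to a 320x240 (W x H) input image.
    """
    rng = np.random.default_rng() if rng is None else rng
    img = center_crop(img, crop)
    # random offsets in [0, crop - out], inclusive
    top = int(rng.integers(0, crop - out + 1))
    left = int(rng.integers(0, crop - out + 1))
    return img[top:top + out, left:left + out]
```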
To run the full AVDC on Meta-World with this checkpoint, run the following command:
# make sure you have the checkpoint ../ckpts/metaworld_DA/model-24.pt
bash benchmark_mw_DA.sh 0
iTHOR
To run the full AVDC on iTHOR, run the following command:
# make sure you have the checkpoint ../ckpts/ithor/model-16.pt
bash benchmark_thor.sh 0