First, get into the imitation/src/imitation/experiments folder:
cd imitation/src/imitation/experiments
Then generate the task definitions for creating gym environments of the Watch&Move tasks:
bash generate_tasks.sh
Download the expert demos from here and unzip the file in the imitation/src/imitation/output folder.
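A minimal sketch of the unzip step, under the assumption that the downloaded archive is named expert_demos.zip (a hypothetical name; use the file you actually downloaded from the link above). A placeholder zip is built here with Python's stdlib zipfile module purely so the snippet runs standalone; in practice you would skip the creation step and only extract.

```shell
workdir=$(mktemp -d)
cd "$workdir"

# Build a placeholder archive (stand-in for the real download).
mkdir -p demos
echo "placeholder" > demos/demo_0.pkl
python3 -m zipfile -c expert_demos.zip demos

# Extract into the expected output folder (relative path shown for illustration).
mkdir -p imitation/src/imitation/output
python3 -m zipfile -e expert_demos.zip imitation/src/imitation/output
```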
To run training and evaluation for each task, use the bash scripts in the imitation/src/imitation/experiments folder. For example, to run training for task 5, run the following commands:
cd imitation/src/imitation/experiments
bash task5.sh
The results will be saved in the imitation/src/imitation/output/GEM folder.
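To run all tasks in sequence rather than one at a time, a loop over the per-task scripts can be sketched as below. The task1.sh, task2.sh, ... naming follows the task5.sh example above, and the task count here is an assumption. Stub scripts are created in a temp dir so the loop is runnable standalone; in the actual repo you would run the loop inside the experiments folder instead.

```shell
# Stand-in for the repo's experiments folder, with stub task scripts.
workdir=$(mktemp -d)
for i in 1 2 3; do
  printf 'echo "task %s done"\n' "$i" > "$workdir/task$i.sh"
done

# Run each per-task script in turn, counting completed runs.
ran=0
for script in "$workdir"/task*.sh; do
  bash "$script"
  ran=$((ran + 1))
done
```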
Cite
If you use this code in your research, please cite the following papers.
@inproceedings{netanyahu2022discoverying,
  title = {Discovering Generalizable Spatial Goal Representations via Graph-based Active Reward Learning},
  author = {Netanyahu, Aviv and Shu, Tianmin and Tenenbaum, Joshua B and Agrawal, Pulkit},
  booktitle = {39th International Conference on Machine Learning (ICML)},
  year = {2022}
}
@misc{wang2020imitation,
  author = {Wang, Steven and Toyer, Sam and Gleave, Adam and Emmons, Scott},
  title = {The {\tt imitation} Library for Imitation Learning and Inverse Reinforcement Learning},
  year = {2020},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/HumanCompatibleAI/imitation}}
}
About
Code for ICML 2022 paper "Discovering Generalizable Spatial Goal Representations via Graph-based Active Reward Learning"