All environments are included in the code, so there is no need to additionally install Multi-sensor Multi-target Coverage (MSMTC) or MPE (Cooperative Navigation).
Note that the command above loads the default environment described in the paper. To change the number of agents and targets, use the `num-agents` and `num-targets` arguments.
After running the command above, you can run the following command to perform the Communication Reduction described in the paper:
The command above trains on CPU. To train on GPU, add `--gpu-id [cuda_device_id]` to the command. Note that this implementation does NOT support multi-GPU training.
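Putting the options above together, an invocation might look like the sketch below. The flag names come from the text above; the entry-point script name (`main.py`) is an assumption, so check the repo's actual run command before copying this.

```shell
# Sketch only: trains with a custom number of agents/targets on GPU 0.
# `main.py` is a hypothetical entry point; `--gpu-id` is documented above.
python main.py --num-agents 4 --num-targets 8 --gpu-id 0
```

Omitting `--gpu-id` falls back to CPU training, as noted above.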
Rendering
After training, you can load the trained model and render its behavior with the following command.
If you found ToM2C useful, please consider citing:
@inproceedings{
wang2021tomc,
title={ToM2C: Target-oriented Multi-agent Communication and Cooperation with Theory of Mind},
author={Yuanfei Wang and Fangwei Zhong and Jing Xu and Yizhou Wang},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=M3tw78MH1Bk}
}