Before running the demo code, please set up a depth camera and a robot, implement the camera and robot control code in utils.py, and then modify the corresponding code in main.py at lines 20-22, 626, and 663.
If you would like to run the demo without a robot, use a local image and comment out the relevant code (see lines 29-32).
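For the no-robot path, a minimal sketch of the fallback might look like the following. The function name and image path here are illustrative assumptions, not the repo's actual API; the real camera-capture code lives in utils.py.

```python
from pathlib import Path

def load_observation(image_path: str = "images/demo_scene.png") -> bytes:
    # Hypothetical fallback for running without a robot: instead of grabbing
    # a frame from the depth camera (implemented in utils.py), read a saved
    # scene image from disk. Path and name are illustrative only.
    p = Path(image_path)
    if not p.is_file():
        raise FileNotFoundError(
            f"no local image at {p}; capture one or adjust the path"
        )
    return p.read_bytes()
```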
Please set the following environment variable to use the OpenAI API:

```shell
export OPENAI_API_KEY=your_openai_key
```
Usage
To get started with a simple demo, assuming you have a depth camera set up and running, take the following steps:
If `--collect_log` is set, the results will be logged to `wonderful_team_robotics/<task>_<run_number>` and the agent responses will be saved to `wonderful_team_robotics/<task>_<run_number>/response_log.txt`. Otherwise, the results will be saved to `wonderful_team_robotics/log`.
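The logging layout described above could be sketched as follows. This is an assumed reconstruction of the directory naming, not the repo's actual implementation; the run number is simply the next unused index for the given task.

```python
from pathlib import Path

def make_log_dir(task: str,
                 root: str = "wonderful_team_robotics",
                 collect_log: bool = True) -> Path:
    # Hypothetical sketch of the logging scheme: with --collect_log, results
    # go to <root>/<task>_<run_number>, picking the first unused run number;
    # otherwise everything lands in <root>/log.
    base = Path(root)
    if collect_log:
        run = 0
        while (base / f"{task}_{run}").exists():
            run += 1
        out = base / f"{task}_{run}"
    else:
        out = base / "log"
    out.mkdir(parents=True, exist_ok=True)
    return out
```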
Citation
If you find Wonderful Team useful in your research or applications, please consider citing it with the following BibTeX entry:
```bibtex
@misc{wang2024wonderfulteam,
      title={Wonderful Team: Zero-Shot Physical Task Planning with Visual LLMs},
      author={Zidan Wang and Rui Shen and Bradly Stadie},
      year={2024},
      eprint={2407.19094},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2407.19094},
}
```