phospho (or phosphobot) is software that lets you control robots, record data, and train and use VLA (Vision Language Action) models.
- 🕹️ Control your robots to record datasets in minutes with a keyboard, a gamepad, a leader arm, and more
- ⚡ Train action models such as ACT, π0, or gr00t-n1.5 in one click
- 🦾 Compatible with the SO-100, SO-101, Unitree Go2, Agilex Piper...
- 🚪 Dev-friendly API
- 🤗 Fully compatible with LeRobot and HuggingFace
- 🖥️ Runs on macOS, Linux and Windows
- 🥽 Meta Quest app for teleoperation
- 📸 Supports most cameras (classic, depth, stereo)
- 🔌 Open Source: Extend it with your own robots and cameras
Purchase your phospho starter pack at robots.phospho.ai, or use one of the supported robots:
- SO-100
- SO-101
- Koch v1.1 (beta)
- WX-250 by Trossen Robotics (beta)
- AgileX Piper (Linux-only, beta)
- Unitree Go2 Air, Pro, Edu (beta)
- LeCabot (beta)
See this README for more details on how to add support for a new robot.
Install phosphobot for your OS using the one-liners available here.
Go to the webapp at `YOUR_SERVER_ADDRESS:YOUR_SERVER_PORT` (default: `localhost:80`) and click Control.
You'll be able to control your robot with:
- a keyboard
- a gamepad
- a leader arm
- a Meta Quest
Note: port 80 might already be in use. If that's the case, the server will spin up on `localhost:8020` instead.
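If you'd rather drive things from a script than the webapp, the same server is reachable over HTTP. Below is a minimal connectivity sketch, assuming the default address and a `/status` endpoint (the endpoint name is an assumption; verify the actual routes in the interactive docs at `/docs`):

```python
# Minimal connectivity check against a running phosphobot server.
# The /status endpoint name is an assumption; see /docs for the real routes.
import requests

BASE_URL = "http://localhost:80"  # or http://localhost:8020 if port 80 was taken

resp = requests.get(f"{BASE_URL}/status", timeout=5)
resp.raise_for_status()
print(resp.json())  # prints server status info if phosphobot is up
```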
Record a dataset of about 50 episodes of the task you want the robot to learn.
Check out the docs for more details.
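Recording can also be scripted. Here is a hedged sketch, assuming `/recording/start` and `/recording/stop` endpoints with an illustrative payload (check `/docs` for the real routes and fields):

```python
# Hedged sketch of scripting one recorded episode.
# Endpoint names and the JSON payload are assumptions, not the confirmed API.
import time
import requests

BASE_URL = "http://localhost:80"

# Start recording an episode (the dataset_name field is illustrative)
requests.post(f"{BASE_URL}/recording/start", json={"dataset_name": "my_task"}, timeout=5)

time.sleep(30)  # teleoperate the robot while the episode records

# Stop recording and save the episode
requests.post(f"{BASE_URL}/recording/stop", timeout=5)
```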
To train an action model on the dataset you recorded, you can:
- train a model directly from the phosphobot webapp (see this tutorial)
- use your own machine (see this tutorial to finetune gr00t n1)
In both cases, you will end up with a trained model exported to Hugging Face.
To learn more about training action models for robotics, check out the docs.
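Since the trained model is pushed to a Hugging Face repo, you can pull it locally with `huggingface_hub` (the repo id below is a placeholder for your own):

```python
# Download the trained model files from Hugging Face.
# The repo id is a placeholder; use the one printed after training.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="YOUR_HF_USERNAME/my-trained-model")
print(local_dir)  # local path containing the model weights and config
```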
Now that you have a trained model hosted on Hugging Face, you can use it to control your robot either:
- directly from the webapp
- from your own code using the phosphobot python package (see this script for an example)
Learn more in the docs.
Congrats! You just trained and used your first action model on a real robot.
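To get a feel for what the control loop looks like from your own code, here is a rough sketch. The `/joints/read` and `/joints/write` endpoints and their payloads are assumptions, and `dummy_policy` stands in for your trained model's inference call; the linked example script shows the actual phosphobot package usage.

```python
# Rough control-loop sketch: read state, compute an action, send it back.
# Endpoint names and payload shapes are assumptions; see /docs and the
# example script for the real API.
import requests

BASE_URL = "http://localhost:80"

def dummy_policy(observation: dict) -> list[float]:
    # Placeholder: a real policy would run your trained action model here.
    return observation.get("angles", [0.0] * 6)

state = requests.post(f"{BASE_URL}/joints/read", timeout=5).json()
action = dummy_policy(state)
requests.post(f"{BASE_URL}/joints/write", json={"angles": action}, timeout=5)
```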
You can directly call the phosphobot server from your own code, using the API.
Go to the interactive API docs to explore the endpoints and try them out. They are available at `YOUR_SERVER_ADDRESS:YOUR_SERVER_PORT/docs` (default: `localhost:80/docs`).
We release new versions very often, so make sure to check the API docs for the latest features and changes.
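Because the interactive docs are generated from an OpenAPI schema, you can also list every available route programmatically, which is handy given how often the API changes. This assumes the server follows the common convention of serving the schema at `/openapi.json`:

```python
# List all routes from the server's OpenAPI schema.
# The /openapi.json path is an assumption based on the usual convention
# for servers with interactive /docs pages; adjust if needed.
import requests

schema = requests.get("http://localhost:80/openapi.json", timeout=5).json()
for path in sorted(schema["paths"]):
    print(path)
```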
Connect with other developers and share your experience in our Discord community.
```bash
git clone https://github.com/phospho-app/phosphobot.git
```
- On macOS and Linux, to build the frontend and start the backend, run:

```bash
make
```

- On Windows, the Makefile doesn't work, so run the commands directly:

```bash
cd ./dashboard && (npm i && npm run build && mkdir -p ../phosphobot/resources/dist/ && cp -r ./dist/* ../phosphobot/resources/dist/)
cd phosphobot && uv run --python 3.10 phosphobot run --simulation=headless
```
- Go to `localhost:80` or `localhost:8020` in your browser to see the dashboard. Go to `localhost:80/docs` to see the API docs.
We welcome contributions! Read our contribution guide, and check out our bounty program.
Here are some of the ways you can contribute:
- Add support for new AI models
- Add support for new teleoperation controllers
- Add support for new robots and sensors
- Add something you built to the examples
- Improve the dataset collection and manipulation
- Improve the documentation and tutorials
- Improve code quality and refactoring
- Improve the performance of the app
- Fix issues you faced
- Documentation: Read the documentation
- Community Support: Join our Discord server
- Issues: Submit problems or suggestions through GitHub Issues
MIT License
Made with 💚 by the Phospho community