RL3D is a framework for visual reinforcement learning (RL) that uses a pretrained 3D visual representation and jointly trains the policy with an auxiliary view synthesis task. RL3D performs novel view synthesis across diverse RL tasks and achieves strong sample efficiency, robustness under sim-to-real transfer, and generalization to unseen environments.
Instructions
Assuming that you already have MuJoCo installed, install dependencies using conda:
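A typical conda setup might look like the following; the environment file name and environment name are assumptions and may differ in this repository:

```bash
# Create and activate a conda environment from the repository's environment file
# (file and environment names here are placeholders; check the repo for the actual names).
conda env create -f environment.yaml
conda activate rl3d
```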
After installing the dependencies, you can train an agent with the provided script:
bash scripts/train.sh
Evaluation videos and model weights can be saved with the arguments save_video=1 and save_model=1. Refer to arguments.py for a full list of options and default hyperparameters.
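As a rough illustration of how such flags are typically defined, an argparse-style parser could look like the sketch below; the actual arguments.py in this repository may use different names, defaults, and parsing logic:

```python
# Hypothetical sketch of an arguments file; the real arguments.py may differ.
import argparse

def parse_args():
    parser = argparse.ArgumentParser(description="RL3D training arguments")
    # Task settings (names and defaults here are illustrative only)
    parser.add_argument("--task", type=str, default="walker_walk")
    parser.add_argument("--seed", type=int, default=0)
    # Logging and checkpointing flags mentioned in this README
    parser.add_argument("--save_video", type=int, default=0)
    parser.add_argument("--save_model", type=int, default=0)
    parser.add_argument("--use_wandb", type=int, default=0)
    parser.add_argument("--wandb_project", type=str, default=None)
    parser.add_argument("--wandb_entity", type=str, default=None)
    return parser.parse_args()

if __name__ == "__main__":
    print(parse_args())
```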
The training script supports both local logging and cloud-based logging with Weights & Biases (W&B). To use W&B, provide an API key by setting the environment variable WANDB_API_KEY=<YOUR_KEY>, set use_wandb=1, and add your W&B project and entity details in the script.
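For example, a W&B-enabled run might be launched as follows; exactly where use_wandb and the project/entity settings are consumed depends on the script:

```bash
# Export your W&B API key so the run can authenticate
export WANDB_API_KEY=<YOUR_KEY>
# Edit scripts/train.sh to set use_wandb=1 and your W&B project/entity, then launch
bash scripts/train.sh
```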
Pretrained Model
Load the encoder from our pretrained 3D visual representation.
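A minimal PyTorch loading sketch is shown below; the checkpoint path, the key under which weights are stored, and the encoder construction are assumptions, not the actual API of this repository:

```python
# Hypothetical loading sketch: the checkpoint path and storage format are
# placeholders; adapt them to the released checkpoint.
import torch

# Load the released checkpoint on CPU (path is a placeholder).
checkpoint = torch.load("pretrained/3d_encoder.pt", map_location="cpu")

# Some checkpoints store weights under a key such as "state_dict";
# fall back to the raw object otherwise.
if isinstance(checkpoint, dict) and "state_dict" in checkpoint:
    state_dict = checkpoint["state_dict"]
else:
    state_dict = checkpoint

# `encoder` would be the 3D visual representation module defined in this repo;
# constructing it is repository-specific and therefore left commented out here.
# encoder.load_state_dict(state_dict, strict=False)
```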