To train the image-based policy, use the config train_tedi_unet_hybrid_workspace.yaml instead.
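For reference, a single-seed image-based run can likely be launched with the Hydra-based train.py entry point inherited from the Diffusion Policy codebase; the seed and device overrides below are only illustrative:
python train.py --config-dir=diffusion_policy/config --config-name=train_tedi_unet_hybrid_workspace.yaml training.seed=42 training.device=cuda:0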
Multi-seed training
We use the same YAML files for multi-seed training. On a SLURM cluster, run:
ray start --head --num-gpus=3 --port=50004
python ray_train_multirun.py --config-dir=diffusion_policy/config --config-name=train_tedi_unet_lowdim_workspace.yaml --seeds=42,43,44
See the Diffusion Policy repo for more details. In the paper, we report max/test_mean_score as "Max" and k_min_train_loss/test_mean_score as "Avg".
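As a rough illustration of how these two numbers can be derived from training logs, here is a minimal sketch assuming each run writes per-epoch metrics as JSON lines with keys test/mean_score and train_loss, and that "Avg" averages the k = 10 checkpoints with lowest training loss; the actual key names, log format, and value of k in the codebase may differ:

import json

def max_and_avg_scores(log_path, k=10):
    # Collect epochs that report both an evaluation score and a training loss.
    # Key names and k are assumptions; adjust them to match the actual logs.
    records = []
    with open(log_path) as f:
        for line in f:
            rec = json.loads(line)
            if "test/mean_score" in rec and "train_loss" in rec:
                records.append(rec)
    # "Max": best test score over all evaluated checkpoints.
    max_score = max(r["test/mean_score"] for r in records)
    # "Avg": mean test score over the k checkpoints with lowest training loss.
    lowest = sorted(records, key=lambda r: r["train_loss"])[:k]
    avg_score = sum(r["test/mean_score"] for r in lowest) / len(lowest)
    return max_score, avg_score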
Reproducing the results
We obtained all results in the paper by multi-seed training. The results for Consistency Policy were obtained by running multi-seed training with the Consistency Policy repo.
For the image-based Push-T task, we trained for 1000 epochs. For the image-based Robomimic tasks, we trained for 1500 epochs on Lift, Can, and Square, and 500 epochs on Transport and Tool-hang.
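As an example, the epoch count can presumably be adjusted with a Hydra override when launching a run; training.num_epochs is the parameter name used in the Diffusion Policy configs, but verify it against the workspace config you are using:
python train.py --config-dir=diffusion_policy/config --config-name=train_tedi_unet_hybrid_workspace.yaml training.num_epochs=1500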
If you find our work useful, please consider citing our paper:
@misc{høeg2024streamingdiffusionpolicyfast,
      title={Streaming Diffusion Policy: Fast Policy Synthesis with Variable Noise Diffusion Models},
      author={Sigmund H. Høeg and Yilun Du and Olav Egeland},
      year={2024},
      eprint={2406.04806},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2406.04806},
}