Code for working with the EvoGym paper's data. All datasets are hosted on huggingface.
- EvoGym/robots: 90k+ annotated robot structures
- EvoGym/robots-with-policies: 2.5k+ annotated robot structures and policies
The package installed by this repo is necessary to deserialize robot policies. Python versions 3.8 through 3.10 are supported.
```shell
conda create -n "egdatasets" python=3.8
conda activate egdatasets
pip install -e .
```
The EvoGym/robots dataset contains 90k+ annotated robot structures. The fields of each robot in the dataset are as follows:

- `uid` (str): Unique identifier for the robot
- `body` (int64 np.ndarray): 2D array indicating the voxels that make up the robot
- `connections` (int64 np.ndarray): 2D array indicating how the robot's voxels are connected. In this dataset, all robots are fully-connected, meaning that all adjacent voxels are connected
- `reward` (float): Reward achieved by the robot's policy
- `env_name` (str): Name of the EvoGym environment (task) the robot was trained on
- `generated_by` ("Genetic Algorithm" | "Bayesian Optimization" | "CPPN-NEAT"): Algorithm used to generate the robot
**Note:** Please see `robots.py` for more details on how to use this dataset.
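Since every robot in the dataset is fully-connected, the `connections` field can be understood as the set of all adjacent non-empty voxel pairs in `body`. The sketch below illustrates that convention with a small NumPy function; it is an illustrative reimplementation under that assumption, not the package's own code, and the exact pair ordering used by the dataset may differ.

```python
import numpy as np

def full_connectivity(body: np.ndarray) -> np.ndarray:
    """Illustrative sketch: build a (2, n) int64 array of flat-index
    pairs for all adjacent non-empty voxels (0 = empty) in a 2D body
    grid, matching the 'fully-connected' convention described above."""
    h, w = body.shape
    pairs = []
    for r in range(h):
        for c in range(w):
            if body[r, c] == 0:
                continue
            idx = r * w + c
            # Connect to right and down neighbors only, so each
            # undirected edge is recorded exactly once.
            if c + 1 < w and body[r, c + 1] != 0:
                pairs.append((idx, idx + 1))
            if r + 1 < h and body[r + 1, c] != 0:
                pairs.append((idx, idx + w))
    return np.array(pairs, dtype=np.int64).T

# Example: a 2x2 body with one empty voxel has two connections.
body = np.array([[1, 3],
                 [0, 4]], dtype=np.int64)
conns = full_connectivity(body)  # pairs (0, 1) and (1, 3)
```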
The EvoGym/robots-with-policies dataset contains 2.5k+ annotated robot structures and their policies. This dataset is a subset of the EvoGym/robots dataset, with the addition of serialized policies for each robot. The fields of each robot in the dataset are as follows:

- `uid` (str): Unique identifier for the robot
- `body` (int64 np.ndarray): 2D array indicating the voxels that make up the robot
- `connections` (int64 np.ndarray): 2D array indicating how the robot's voxels are connected. In this dataset, all robots are fully-connected, meaning that all adjacent voxels are connected
- `reward` (float): Reward achieved by the robot's policy [1]
- `env_name` (str): Name of the EvoGym environment (task) the robot was trained on
- `generated_by` ("Genetic Algorithm" | "Bayesian Optimization" | "CPPN-NEAT"): Algorithm used to generate the robot
- `policy_blob` (binary): Serialized policy for the robot
**Note:** Please see `robots_with_policies.py` for more details on how to use this dataset, and for an example of how to deserialize the policies.
[1] Rewards may not exactly match those in EvoGym/robots, due to changes in the library, system architecture, etc.
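Because each record carries a `reward` and an `env_name`, a common pattern is to group robots by task and select the best performer per environment. The sketch below shows this with mock records that follow the field schema above; the `uid`, `reward`, and `env_name` values are hypothetical examples, not taken from the dataset.

```python
def best_per_env(robots):
    """Return the highest-reward record for each environment name.
    Records are plain dicts following the dataset's field schema."""
    best = {}
    for r in robots:
        env = r["env_name"]
        if env not in best or r["reward"] > best[env]["reward"]:
            best[env] = r
    return best

# Mock records for illustration only (values are hypothetical).
records = [
    {"uid": "a1", "reward": 3.2, "env_name": "Walker-v0",
     "generated_by": "Genetic Algorithm"},
    {"uid": "b2", "reward": 5.7, "env_name": "Walker-v0",
     "generated_by": "CPPN-NEAT"},
    {"uid": "c3", "reward": 1.4, "env_name": "Climber-v0",
     "generated_by": "Bayesian Optimization"},
]

winners = best_per_env(records)
```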
If you find these datasets useful, please consider citing our paper:
```bibtex
@article{bhatia2021evolution,
  title={Evolution gym: A large-scale benchmark for evolving soft robots},
  author={Bhatia, Jagdeep and Jackson, Holly and Tian, Yunsheng and Xu, Jie and Matusik, Wojciech},
  journal={Advances in Neural Information Processing Systems},
  volume={34},
  year={2021}
}
```