This repository contains the implementation of the NeurIPS 2025 paper *Hyper-GoalNet: Goal-Conditioned Manipulation Policy Learning with HyperNetworks*.
It provides our training and evaluation code in simulation. To simplify reproducing our results, we also provide the datasets generated with MimicGen and the corresponding checkpoints for each task.
Create and activate the conda environment:
```bash
conda create -n hyperpolicy python=3.8
conda activate hyperpolicy
```
Clone the repository:
```bash
git clone https://github.com/wantingyao/hyper-goalnet.git
```
Install the dependency packages:
```bash
python -m pip install pip==23.3.1
cd hyper-goalnet
pip install -r requirements.txt
```
We use r3m pre-trained weights for our visual encoder.
```bash
git clone https://github.com/facebookresearch/r3m.git
cd r3m
pip install -e .
```
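To sanity-check the install, here is a minimal sketch following the example usage in the r3m repository (the resnet50 backbone is illustrative and may not be the variant our configs select):

```python
import torch
from r3m import load_r3m

device = "cuda" if torch.cuda.is_available() else "cpu"
r3m = load_r3m("resnet50")  # resnet18 / resnet34 variants also available
r3m.eval()
r3m.to(device)

# R3M expects raw pixel values in [0, 255]
image = torch.randint(0, 255, (1, 3, 224, 224), device=device).float()
with torch.no_grad():
    embedding = r3m(image)
print(embedding.shape)  # (1, 2048) for the resnet50 backbone
```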
The simulation results of Hyper-GoalNet are obtained on the robosuite benchmark using datasets generated by MimicGen. Please follow the steps below to install the required dependencies.
```bash
# Install robosuite
cd ..
git clone https://github.com/ARISE-Initiative/robosuite.git
cd robosuite
git checkout b9d8d3de5e3dfd1724f4a0e6555246c460407daa
pip install -e .

# Install robosuite-task-zoo
cd ..
git clone https://github.com/ARISE-Initiative/robosuite-task-zoo
cd robosuite-task-zoo
git checkout 74eab7f88214c21ca1ae8617c2b2f8d19718a9ed
pip install -e .

# Install mimicgen
cd ..
git clone https://github.com/NVlabs/mimicgen.git
cd mimicgen
pip install -e .
```
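An optional sanity check (a minimal sketch; it only verifies that the packages import after installation):

```python
# Optional sanity check: confirm the simulation stack installed correctly.
import mimicgen
import robosuite

print("robosuite version:", robosuite.__version__)
```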
We modified robosuite to support evaluation on randomly generated test data from MimicGen. After installing robosuite, copy the following methods to the end of robosuite/robosuite/environments/base.py (indent them as methods of the environment class, since they reference self):

```python
def save_state_dict(self):
    return self.sim.get_state().flatten()

def set_specified_state(self, specified=None, value=None):
    if specified:
        self.sim.set_state_from_flattened(value)
        self.sim.forward()

def get_obs(self):
    observations = (
        self.viewer._get_observations(force_update=True)
        if self.viewer_get_obs
        else self._get_observations(force_update=True)
    )
    # Return new observations
    return observations
```
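These hooks let an evaluation script snapshot and restore an exact simulator state. A minimal usage sketch (the Lift task and Panda robot are illustrative stand-ins for the MimicGen task variants used in our evaluation):

```python
import numpy as np
import robosuite as suite

env = suite.make(
    env_name="Lift", robots="Panda",
    has_renderer=False, has_offscreen_renderer=False, use_camera_obs=False,
)
env.reset()

state = env.save_state_dict()        # snapshot the flattened MuJoCo state
env.step(np.zeros(env.action_dim))   # advance the simulation by one step
env.set_specified_state(specified=True, value=state)  # restore the snapshot
obs = env.get_obs()                  # observations for the restored state
```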
The training and testing datasets used in our experiments are available for download from link. Note that these datasets were generated by MimicGen; you can also follow the MimicGen documentation to generate your own custom datasets.
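After downloading, you can inspect a dataset's contents. A sketch assuming the robomimic-style HDF5 layout that MimicGen produces (a top-level data group holding one demo_* group per trajectory; verify against your files):

```python
import h5py

# Illustrative file path; substitute the task you downloaded.
with h5py.File("datasets/training_data/coffee_d0.hdf5", "r") as f:
    demos = list(f["data"].keys())
    print(f"{len(demos)} demonstrations")
    demo = f["data"][demos[0]]
    print("actions shape:", demo["actions"].shape)
```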
Ensure your data is placed in the correct folders, as shown in the project structure below:
```bash
cd ../hyper-goalnet
mkdir -p datasets/{testing_data,training_data}
```
```
.
├── hyper-goalnet/
│   ├── algo/
│   ├── configs/
│   ├── datasets/
│   │   ├── testing_data/
│   │   │   └── ${task}.hdf5   # place your testing data here
│   │   └── training_data/
│   │       └── ${task}.hdf5   # place your training data here
│   ├── models/
│   ├── scripts/
│   └── utils/
├── mimicgen/
├── r3m/
├── robosuite/
└── robosuite-task-zoo/
```
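A small sketch to verify the files are in place (the task name coffee_d0 is the repo's default; adjust for your task):

```python
from pathlib import Path

task = "coffee_d0"  # default task; substitute your own
for split in ("training_data", "testing_data"):
    path = Path("datasets") / split / f"{task}.hdf5"
    print(path, "found" if path.exists() else "MISSING")
```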
Run the following script to start training. The training configuration is located at hyper-goalnet/configs/config.yaml; the default training task is coffee_d0.
```bash
python scripts/train.py
```
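Before launching, you can optionally peek at the config to confirm the task and hyperparameters. A minimal sketch; the key names inside config.yaml are not specified here, so inspect the printed dictionary rather than assuming a schema:

```python
import yaml

# Print the full training config to see which task and hyperparameters are set.
with open("configs/config.yaml") as f:
    print(yaml.safe_load(f))
```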
Update hyper-goalnet/configs/eval_config.yaml with your checkpoint path, then run the evaluation script:
```bash
python scripts/eval.py
```
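If you batch evaluations over several checkpoints, the config can also be edited programmatically. A hypothetical snippet: the checkpoint_path key and the .pth file name are assumptions, so check eval_config.yaml for the actual field names:

```python
import yaml

cfg_path = "configs/eval_config.yaml"
with open(cfg_path) as f:
    cfg = yaml.safe_load(f)

# "checkpoint_path" is a hypothetical key name; the file name is illustrative.
cfg["checkpoint_path"] = "checkpoints/coffee_d0.pth"

with open(cfg_path, "w") as f:
    yaml.safe_dump(cfg, f)
```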
If you have a remote X server, set the DISPLAY variable to your actual display number:
```bash
# Check the current display
echo $DISPLAY
# Set the display variable (replace :0 with your actual display number)
export DISPLAY=:0
```
You can download our trained checkpoints from Link.
Please cite Hyper-GoalNet if you find this repository helpful:
```bibtex
@article{zhou2025hyper,
  title={Hyper-GoalNet: Goal-Conditioned Manipulation Policy Learning with HyperNetworks},
  author={Zhou, Pei and Yao, Wanting and Luo, Qian and Zhou, Xunzhe and Yang, Yanchao},
  journal={arXiv preprint arXiv:2512.00085},
  year={2025}
}
```
Licensed under the MIT License.
