Exploitation-Guided Exploration for Semantic Embodied Navigation
Broadly speaking, this work uses a geometric policy that is very good at solving some portion of a task to guide the exploration of a neural policy.
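As a rough illustration of this idea, the sketch below alternates between a learned exploration policy and a geometric exploitation policy. All names here (`run_episode`, `goal_detected`, the `.act()` interface, and the gym-style `env`) are hypothetical stand-ins for illustration, not the actual interfaces in this codebase.

```python
# Minimal sketch of exploitation-guided exploration (illustrative only).
# The policy objects and `goal_detected` predicate are hypothetical.

def run_episode(env, neural_policy, geometric_policy, goal_detected, max_steps=500):
    """Explore with the learned policy; hand off to the geometric
    (map-based) policy once the goal has been detected."""
    obs = env.reset()
    info = {}
    for _ in range(max_steps):
        if goal_detected(obs):
            # Exploitation: a geometric planner (e.g., shortest path over
            # an occupancy map) handles the portion of the task it solves well.
            action = geometric_policy.act(obs)
        else:
            # Exploration: the neural policy searches for the goal.
            action = neural_policy.act(obs)
        obs, reward, done, info = env.step(action)
        if done:
            break
    return info
```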
Setting up Anaconda
# If you haven't already cloned the repository:
# git clone git@github.com:Jbwasse2/XGX.git

# Create the anaconda environment
conda create --name xgx python=3.7
# Install Habitat-Sim and Scikit-fmm
conda install habitat-sim=0.2.1 withbullet headless -c conda-forge -c aihabitat
conda install -c conda-forge scikit-fmm=2019.1.30
# Install Habitat-Lab
cd habitat-lab
pip install -r requirements.txt
python setup.py develop --all  # install habitat and habitat_baselines
cd ..
# Install torch for your version of CUDA; we use cu113.
# See https://pytorch.org/get-started/previous-versions/ for commands
# if your version of CUDA does not match.
pip install torch==1.10.0+cu113 torchvision==0.11.0+cu113 torchaudio==0.10.0 -f https://download.pytorch.org/whl/torch_stable.html
pip install -r requirements.txt
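As an optional sanity check (not part of the original setup steps), you can confirm the key packages import cleanly; note that `skfmm` is the import name for scikit-fmm:

```python
# Optional check that the environment resolved correctly.
import habitat_sim   # Habitat-Sim
import skfmm         # scikit-fmm  # noqa: F401
import torch

print(habitat_sim.__version__)    # expect "0.2.1"
print(torch.__version__)          # expect "1.10.0+cu113"
print(torch.cuda.is_available())  # True if the CUDA build matches your driver
```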
Setting up data and models
HM3D Dataset
We utilize the HM3D-V1 dataset. Details of this dataset can be found here. The scenes should be placed under ./data/scene_datasets/, yielding, for example, data/scene_datasets/hm3d/val/00877-4ok3usBNeis/4ok3usBNeis.basis.glb
We also utilize the standard HM3D-V1 train/val splits, which can be found here. These splits should be placed under ./data/datasets/objectnav/, yielding, for example, data/datasets/objectnav/hm3d/v1/val/content/4ok3usBNeis.json.gz
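To confirm the layout matches what is described above, a quick check like the following can help; the paths simply mirror the two examples given:

```python
# Verify the example scene and episode files landed in the expected places.
import os

expected = [
    "data/scene_datasets/hm3d/val/00877-4ok3usBNeis/4ok3usBNeis.basis.glb",
    "data/datasets/objectnav/hm3d/v1/val/content/4ok3usBNeis.json.gz",
]
for path in expected:
    status = "ok" if os.path.exists(path) else "MISSING"
    print(f"{status}: {path}")
```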
Models
We retrain RedNet on the HM3D dataset. We also use XGX to retrain a CNN+RNN policy.
After running evaluation, we recorded the following results:
Average episode reward: 0.7275
Average episode distance_to_goal: 2.4974
Average episode success: 0.7275
Average episode spl: 0.3613
Average episode softspl: 0.3906
Average episode sparse_reward: 1.8188
Average episode num_steps: 160.5845