Code repository for the paper: MultiPhys: Multi-Person Physics-aware 3D Motion Estimation
Nicolas Ugrinovic, Boxiao Pan, Georgios Pavlakos, Despoina Paschalidou, Bokui Shen, Jordi Sanchez-Riera, Francesc Moreno-Noguer, Leonidas Guibas
[2024/06] Demo code release!
This code was tested on Ubuntu 20.04 LTS and requires a CUDA-capable GPU.
- First, clone the repository:

```bash
git clone https://github.com/nicolasugrinovic/multiphys.git
cd multiphys
```

- To set up the conda environment, run the following command:

```bash
bash install_conda.sh
```
We also include the following steps for trouble-shooting.
EITHER: manually install the env and dependencies:

```bash
conda create -n multiphys python=3.9 -y
conda activate multiphys
# install pytorch using pip, update with appropriate cuda drivers if necessary
pip install torch==1.13.0 torchvision==0.14.0 --index-url https://download.pytorch.org/whl/cu117
# uncomment if pip installation isn't working
# conda install pytorch=1.13.0 torchvision=0.14.0 pytorch-cuda=11.7 -c pytorch -c nvidia -y
# install remaining requirements
pip install -r requirements.txt
```
OR:
- Create the environment. We use PyTorch 1.13.0 with CUDA 11.7. Use `env_build.yml` to speed up installation with already-solved dependencies, though it might not be compatible with your CUDA driver:

```bash
conda env create -f env_build.yml
conda activate multiphys
```
- Then install the remaining dependencies with `pip install -r requirements.txt`.
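If the setup went through, a quick optional sanity check (assuming the multiphys environment is active) is to confirm the PyTorch version and that CUDA is visible:

```bash
# optional sanity check: print the PyTorch version and whether CUDA is available
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```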
- Download and set up MuJoCo 2.1.0:

```bash
wget https://github.com/deepmind/mujoco/releases/download/2.1.0/mujoco210-linux-x86_64.tar.gz
tar -xzf mujoco210-linux-x86_64.tar.gz
mkdir ~/.mujoco
mv mujoco210 ~/.mujoco/
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/.mujoco/mujoco210/bin
```
If you have any problems with this, please follow the instructions in the EmbodiedPose repo regarding MuJoCo.
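Note that the export above only applies to the current shell. If you want it to persist, one option (assuming a bash setup; adjust for your shell) is to append it to ~/.bashrc:

```bash
# optional: persist the MuJoCo library path across shell sessions
echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/.mujoco/mujoco210/bin' >> ~/.bashrc
source ~/.bashrc
```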
- Download the demo data, which includes the required models:

```bash
bash fetch_demo_data.sh
```
Trouble-shooting
- (optional) Our code uses EGL to render MuJoCo simulation results in a headless fashion, so you need to have EGL installed.
You MAY need to run the following or similar commands, depending on your system (see also the optional check after this list):

```bash
sudo apt-get install libglfw3-dev libgles2-mesa-dev
```
- For evaluation: to run the collision-based penetration metric found in the evaluation scripts, you need to properly install the SDF package. Please follow the instructions found here.
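Related to the EGL item above, a minimal optional check that mujoco_py imports with EGL enabled might look like this (the first import triggers mujoco_py compilation, which can take a while):

```bash
# optional: verify that mujoco_py imports with headless EGL rendering enabled
export MUJOCO_GL='egl'
python -c "import mujoco_py; print('mujoco_py OK')"
```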
The data used here, including the SLAHMR estimates, should have been downloaded and placed in the correct folders by the fetch_demo_data.sh script.
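If you want to double-check the download, listing the data folders is enough; the exact sub-folder name below is an assumption based on the demo command that follows:

```bash
# optional: confirm the demo data landed where expected
ls sample_data/
ls sample_data/expi/
```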
Run the demo script. You can use the following command:
EITHER, to generate several sequences:
```bash
bash run_demo.sh
```

OR, to generate one sequence:

```bash
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib/nvidia:/home/nugrinovic/.mujoco/mujoco210/bin;
export MUJOCO_GL='egl';
# generate sequence
# expi sequences
python run.py --cfg tcn_voxel_4_5_chi3d_multi_hum --data sample_data/expi/expi_acro1_p1_phalpBox_all_slaInit_slaCam.pkl --data_name expi --name slahmr_override_loop2 --loops_uhc 2 --filter acro1_around-the-back1_cam20
```

Trouble-shooting
- If you have any issues when running mujoco_py for the first time while compiling, take a look at this github issue: mujoco_py issue
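Beyond the two options above, if you want to loop the single-sequence command over every processed file of a dataset, a hypothetical sketch is shown below (the file layout is assumed from the example command, and the --filter flag is omitted, so adjust the flags to your needs):

```bash
# hypothetical sketch: run the single-sequence demo for every ExPI sample file
export MUJOCO_GL='egl'
for pkl in sample_data/expi/*.pkl; do
    python run.py --cfg tcn_voxel_4_5_chi3d_multi_hum --data "$pkl" \
        --data_name expi --name slahmr_override_loop2 --loops_uhc 2
done
```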
The demo will generate a video for each sample that appears in the paper and in the paper's video. Results are saved in the results/scene+/tcn_voxel_4_5_chi3d_multi_hum/results folder. For each dataset, this will generate a folder with the results, following the structure:
```
<dataset-name>
├── slahmr_override_loop2
    ├── <subject-name>
        ├── <action-name>
            ├── <date>
                ├── 1_results_w_2d_p1.mp4
                ├── ...
```

You first need to generate the physically corrected motion for each dataset as explained above. Results should be saved in the results/scene+/tcn_voxel_4_5_chi3d_multi_hum/DATASET_NAME folder for each dataset.
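To quickly see which result videos were produced before preparing them for evaluation, something like the following may help (the path is assumed from the output structure described above):

```bash
# optional: list the generated result videos
find results/scene+/tcn_voxel_4_5_chi3d_multi_hum -name "*.mp4"
```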
Then, process the results to prepare them for the evaluation scripts. To do so, run the metrics/prepare_pred_results.py script, specifying the dataset name and the experiment name, for example:

```bash
python metrics/prepare_pred_results.py --data_name chi3d --exp_name slahmr_override_loop2
```

This will generate .pkl files named after the subjects (for example, s02.pkl for CHI3D) under the experiment folder. If you want to run the penetration metric with SDF, you need to generate a file that saves the vertices for each sequence. To do this, add the --save_verts option, i.e., run the following command:
```bash
python metrics/prepare_pred_results.py --data_name chi3d --exp_name slahmr_override_loop2 --save_verts=1
```

This will generate _verts.pkl files named after the subjects (for example, s02_verts.pkl for CHI3D) under the experiment folder. These files containing the vertices are necessary to compute the SDF penetration metric.
You also need to generate both the .pkl and _verts.pkl files for each baseline you want to evaluate (EmbPose-mp, SLAHMR), or you can download the pre-processed results from here.
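If you want to prepare all three datasets in one go for a given experiment, a hypothetical loop is sketched below (the experiment name is the one used in the demo; substitute the corresponding names for the baselines):

```bash
# hypothetical sketch: prepare predictions (with and without vertices) for every dataset
for ds in chi3d hi4d expi; do
    python metrics/prepare_pred_results.py --data_name "$ds" --exp_name slahmr_override_loop2
    python metrics/prepare_pred_results.py --data_name "$ds" --exp_name slahmr_override_loop2 --save_verts=1
done
```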
For evaluation, use the script metrics/compute_metrics_all.py. This generates the metrics for each specified dataset and each type of metric (i.e., pose, physics-based, and penetration (SDF)).
Please note that for running the penetration metric based on SDF, you need to install
the sdf library. Follow the instructions found here.
To run the evaluation for a given dataset, e.g., CHI3D, run the following commands. Please make sure to change all the paths in the scripts to point to your own folders:
```bash
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib/nvidia:/home/nugrinovic/.mujoco/mujoco210/bin;
# pose metrics
python metrics/compute_metrics_all.py --data_name chi3d --metric_type pose_mp
# physics-based metrics
python metrics/compute_metrics_all.py --data_name chi3d --metric_type phys
# inter-person penetration with SDF
python metrics/compute_metrics_all.py --data_name chi3d --metric_type sdf
```
You can choose any of the 3 datasets: ['chi3d', 'hi4d', 'expi'].
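To compute every metric type for every dataset in one pass, a hypothetical wrapper loop could look like this (it simply repeats the commands above; adjust the MuJoCo path to your own setup):

```bash
# hypothetical sketch: run pose, physics-based, and SDF metrics for all datasets
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib/nvidia:$HOME/.mujoco/mujoco210/bin
for ds in chi3d hi4d expi; do
    for mt in pose_mp phys sdf; do
        python metrics/compute_metrics_all.py --data_name "$ds" --metric_type "$mt"
    done
done
```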
NOTE: the metrics/compute_metrics_all.py script is meant to compute the
results table from the paper for all experiments and all subjects for each
dataset, so in order to generate an output file, you need to generate results
for all subjects in the dataset you choose.
To generate data in the ./sample_data directory, you need to do the following:
- Add two scripts into the SLAHMR repo, third_party/slahmr/run_opt_world.py and third_party/slahmr/run_vis_world.py, and then run the commands placed in ./scripts for each subject in each dataset, e.g.:

```bash
bash scripts/camera_world/run_opt_world_chi3d.sh chi3d/train/s02 0 chi3d
bash scripts/camera_world/run_opt_world_chi3d.sh chi3d/train/s03 0 chi3d
bash scripts/camera_world/run_opt_world_chi3d.sh chi3d/train/s04 0 chi3d
```

Note: you need to change the root variable inside these scripts to point to your own SLAHMR repo directory.
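The three CHI3D commands above follow one pattern, so an equivalent loop over the training subjects is simply:

```bash
# equivalent loop over the CHI3D training subjects
for subj in s02 s03 s04; do
    bash scripts/camera_world/run_opt_world_chi3d.sh chi3d/train/$subj 0 chi3d
done
```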
This will generate {seq_name}_scene_dict.pkl files in the SLAHMR output folder, which are then read by MultiPhys.
If the previous scripts do not work for you, please just run the following command for each video, making sure that you change the data.root and data.seq arguments accordingly:

```bash
python run_opt_world.py data=chi3d run_opt=False run_vis=True data.root=$root/videos/chi3d/train/$seq_num data.seq="${video}" data.seq_id=$seq_num
```

- Run the commands from data_preproc.sh for each dataset. This will generate the files directly in the sample_data folder.
- Finally, you can run the demo code on your processed data as explained above.
NOTE: please replace the paths with your own paths in the code.
- Demo/inference code
- Data pre-processing code
- Evaluation
Parts of the code are taken or adapted from the following amazing repos:
If you find this code useful for your research, please consider citing the following paper:
```bibtex
@inproceedings{ugrinovic2024multiphys,
    author={Ugrinovic, Nicolas and Pan, Boxiao and Pavlakos, Georgios and Paschalidou, Despoina and Shen, Bokui and Sanchez-Riera, Jordi and Moreno-Noguer, Francesc and Guibas, Leonidas},
    title={MultiPhys: Multi-Person Physics-aware 3D Motion Estimation},
    booktitle={Conference on Computer Vision and Pattern Recognition (CVPR)},
    year={2024}
}
```