Existing 3D datasets lack diversity in age and in multi-person scenes. In contrast, RH contains a richer set of subjects with explicit age annotations, captured in the wild. We hope that RH can promote related research, such as monocular depth reasoning and baby / child pose estimation.
To run the demo code, please download the data and set dataset_dir in the demo script.
To use it for training, please refer to BEV for details.
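The annotations are distributed as .npz files. As a minimal sketch of inspecting one with NumPy (the dataset_dir path and the file name are placeholders, not the dataset's actual layout; see the demo code for the real loading logic):

```python
import os
import numpy as np

# Hypothetical location of the downloaded RH data -- adjust to your setup.
dataset_dir = "/path/to/Relative_Human"

# Load one annotation file; allow_pickle is needed if entries are stored as Python objects.
annots = np.load(os.path.join(dataset_dir, "train_annots.npz"), allow_pickle=True)

# List the stored entries to see what the file actually contains.
for key in annots.files:
    print(key, type(annots[key]))
```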
Re-implementation
To reproduce the RH results (Tab. 1 of the BEV paper), please first download the predictions from here, then:
cd Relative_Human/
# BEV / ROMP / CRMH : set the path of downloaded results (.npz) in RH_evaluation/evaluation.py, then run
python -m RH_evaluation.evaluation
cd RH_evaluation/
# 3DMPPE: set the paths in eval_3DMPPE_RH_results.py and then run
python eval_3DMPPE_RH_results.py
# SMAP: set the paths in eval_SMAP_RH_results.py and then run
python eval_SMAP_RH_results.py
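Tab. 1 of the BEV paper reports depth-relation accuracy on RH. Purely as an illustration of that kind of metric (the repository's actual evaluation lives in RH_evaluation/evaluation.py and may differ), a pairwise depth-ordering accuracy could be sketched as:

```python
import numpy as np

def depth_ordering_accuracy(pred_depths, gt_order):
    """Fraction of person pairs whose predicted depth ordering matches the ground truth.
    `pred_depths` holds per-person depths for one image; `gt_order` maps pairs (i, j)
    to +1 if person i is closer than person j, -1 if farther, 0 if at roughly equal
    depth. Both names and formats are illustrative, not RH's actual annotation schema.
    """
    correct, total = 0, 0
    for (i, j), relation in gt_order.items():
        if relation == 0:
            continue  # skip equal-depth pairs in this simplified sketch
        pred_relation = 1 if pred_depths[i] < pred_depths[j] else -1
        correct += int(pred_relation == relation)
        total += 1
    return correct / total if total else 0.0

# Toy usage: three people, ground truth says person 0 is closest and person 2 is farthest.
print(depth_ordering_accuracy(
    np.array([1.2, 2.5, 4.0]),
    {(0, 1): 1, (0, 2): 1, (1, 2): 1},
))
```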
Citation
Please cite our paper if you use RH in your research.
@InProceedings{sun2022BEV,
  author    = {Sun, Yu and Liu, Wu and Bao, Qian and Fu, Yili and Mei, Tao and Black, Michael J},
  title     = {Putting People in their Place: Monocular Regression of {3D} People in Depth},
  booktitle = {IEEE/CVF Conf.~on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2022}
}