## Requirements

- PyTorch (<= 1.4.0; compatibility issues may occur with higher versions)
- tqdm
- opencv-python
- scikit-image
- openmesh

For building evaluation data:

- pybind11 (we recommend `pip install "pybind11[global]"`)
- gcc
- cmake
Install all pip packages with:

```shell
pip install -r requirements.txt
```
## Building Evaluation Data

### Preliminary
Run the following script to compile and generate the relevant Python module, which is used to render left/right color, depth, and mask images from a textured or colored mesh.
These samples are from the RenderPeople and BUFF datasets.

Note: the mesh used for rendering must lie inside a specific bounding box.
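The exact bounding box is not documented here, but the usual way to satisfy such a constraint is to translate and uniformly scale the mesh vertices before rendering. Below is a minimal NumPy sketch; the helper name `fit_to_bbox` and the target box are illustrative assumptions, not part of this repository:

```python
import numpy as np

def fit_to_bbox(verts, bbox_min, bbox_max):
    """Translate and uniformly scale vertices so they fit inside
    [bbox_min, bbox_max] without distorting the mesh.
    (Hypothetical helper; the repo's required box is not documented here.)"""
    verts = np.asarray(verts, dtype=np.float64)
    bbox_min = np.asarray(bbox_min, dtype=np.float64)
    bbox_max = np.asarray(bbox_max, dtype=np.float64)
    src_min, src_max = verts.min(axis=0), verts.max(axis=0)
    # Uniform scale: the tightest axis of the target box must still
    # contain the largest extent of the source mesh.
    scale = (bbox_max - bbox_min).min() / (src_max - src_min).max()
    src_center = (src_min + src_max) / 2.0
    dst_center = (bbox_min + bbox_max) / 2.0
    return (verts - src_center) * scale + dst_center
```

With openmesh you could, for example, read the vertex array via `mesh.points()`, transform it with a helper like this, and write the result back before rendering.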
## Inference

### Preliminary
Run the following script to compile and generate the deformable convolution from AANet:

```shell
cd AANetPlusFeature/deform_conv
bash build.sh
cd ../..
```
Download the trained model and move it to the `Models` folder.
Generate evaluation data as described in "Building Evaluation Data" above, or capture real data with a ZED camera (we tested on the ZED camera v1).

Note: the left/right images must be rectified before use with the ZED camera.
### Demo

```shell
bash eval.sh
```
The reconstruction result will be saved to the `Results` folder.

Note: at least 10 GB of GPU memory is recommended to run the StereoPIFu model.
## Citation

```bibtex
@inproceedings{yang2021stereopifu,
  author    = {Yang Hong and Juyong Zhang and Boyi Jiang and Yudong Guo and Ligang Liu and Hujun Bao},
  title     = {StereoPIFu: Depth Aware Clothed Human Digitization via Stereo Vision},
  booktitle = {{IEEE/CVF} Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2021}
}
```