The walking and taekwondo datasets can be downloaded from here.
Apply a pre-trained model to render demo videos
We provide pretrained models, which can be found in the outputs folder.
We provide some example scripts under the demo folder.
To run the demo scripts, you first need to download the corresponding dataset and place it in the folder specified by DATASETS -> TRAIN in configs/config_taekwondo.yml and configs/config_walking.yml.
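For reference, the relevant config entry might look like the following sketch (the directory path is a placeholder, not part of the repository; point it at wherever you extracted the dataset):

```yaml
# configs/config_walking.yml (excerpt)
# The path below is an assumed placeholder -- replace it with the
# directory containing the downloaded walking dataset.
DATASETS:
  TRAIN: /path/to/datasets/walking
```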
For the walking sequence, you can render videos in which selected performers are hidden by running the following command:
If you use this code in your research, please cite our paper:
@article{zhang2021editable,
title={Editable free-viewpoint video using a layered neural representation},
author={Zhang, Jiakai and Liu, Xinhang and Ye, Xinyi and Zhao, Fuqiang and Zhang, Yanshun and Wu, Minye and Zhang, Yingliang and Xu, Lan and Yu, Jingyi},
journal={ACM Transactions on Graphics (TOG)},
volume={40},
number={4},
pages={1--18},
year={2021},
publisher={ACM New York, NY, USA}
}
About
PyTorch implementation of our SIGGRAPH 2021 paper: Editable Free-viewpoint Video Using a Layered Neural Representation.