This is a TensorFlow implementation of the following paper: Deep 3D Portrait from a Single Image. We propose a two-step geometry learning scheme that first learns 3DMM face reconstruction from single images and then learns to estimate hair and ear depth in a stereo setup.
Getting Started
System Requirements
Software: Ubuntu 16.04, CUDA 9.0
Python >= 3.5
Usage
Clone the repository and install the dependencies
git clone https://github.com/sicxu/Deep3dPortrait.git
cd Deep3dPortrait
pip install -r requirements.txt
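After installation, the reconstruction pipeline is run as a sequence of per-step scripts. The sketch below only illustrates the intended order (face reconstruction, segmentation, hair/ear depth estimation, mesh export); the script names are hypothetical placeholders, so substitute the actual step scripts shipped in this repository.

```python
# Sketch: run the per-step scripts in order. The script names below are
# hypothetical placeholders for the repository's actual step scripts.
# Each step writes its results under ./output.
import subprocess

steps = [
    'step1_recon_3d_face.py',      # hypothetical: single-image 3DMM face reconstruction
    'step2_face_segmentation.py',  # hypothetical: face/hair/ear segmentation masks
    'step3_get_head_geometry.py',  # hypothetical: hair and ear depth estimation
    'step4_save_obj.py',           # hypothetical: export the reconstructed head mesh
]
for script in steps:
    subprocess.run(['python', script], check=True)
```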
To check the results, see the ./output subfolders, which contain the results of the corresponding steps.
Others
Image pre-alignment is necessary for face reconstruction. We recommend using Bulat et al.'s method to obtain facial landmarks (3D definition). The depth estimation network also needs the face, hair, and ear masks as input; we recommend using Lin et al.'s method for semantic segmentation.
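As a rough illustration of the landmark step, the snippet below uses the face-alignment library released with Bulat et al.'s work to extract 68 3D landmarks from a single image; the input path is a placeholder, and depending on the installed version the enum member is LandmarksType._3D (older releases) or LandmarksType.THREE_D (1.4+).

```python
# Sketch: 3D facial landmark detection with the face-alignment library
# (Bulat et al.); install with `pip install face-alignment`.
import face_alignment
from skimage import io

# Older releases use LandmarksType._3D; 1.4+ renamed it LandmarksType.THREE_D.
fa = face_alignment.FaceAlignment(face_alignment.LandmarksType._3D,
                                  flip_input=False, device='cpu')

image = io.imread('examples/portrait.jpg')  # placeholder input image
landmarks = fa.get_landmarks(image)         # list with one (68, 3) array per detected face
print(landmarks[0].shape)
```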
The render code is modified from tf_mesh_renderer. Note that the renderer we compiled does not support other TensorFlow versions and can only be used on Linux.
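If the compiled rasterizer fails to load, a quick environment check such as the sketch below helps separate a platform or TensorFlow-version mismatch from other errors; the kernel path is an assumption and should be adjusted to wherever the compiled .so lives in this repository.

```python
# Sketch: sanity-check the environment before loading the pre-compiled
# rasterizer kernel. The .so path below is an assumption.
import platform
import tensorflow as tf

assert platform.system() == 'Linux', 'the bundled renderer only supports Linux'
print('TensorFlow version:', tf.__version__)  # must match the version the kernel was built against

try:
    tf.load_op_library('mesh_renderer/kernels/rasterize_triangles_kernel.so')
except tf.errors.NotFoundError as err:
    # NotFoundError typically means the kernel was built against a different
    # TensorFlow ABI, or the path is wrong.
    print('Failed to load rasterizer kernel:', err)
```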
The manipulation code will not be released. If you want to make a comparison with our method, please use the results in our paper, or contact me (sicheng_xu@yeah.net) for more comparisons.
Citation
If you find this code helpful for your research, please cite our paper.