3DHumanGAN: 3D-Aware Human Image Generation with 3D Pose Mapping
ICCV 2023
Zhuoqian Yang1,2, Shikai Li1, Wayne Wu1†, Bo Dai1

Abstract
We present 3DHumanGAN, a 3D-aware generative adversarial network that synthesizes photorealistic images of full-body humans with consistent appearances under different view angles and body poses. To tackle the representational and computational challenges of synthesizing the articulated structure of human bodies, we propose a novel generator architecture in which a 2D convolutional backbone is modulated by a 3D pose mapping network. The 3D pose mapping network is formulated as a renderable implicit function conditioned on a posed 3D human mesh. This design has several merits: i) it leverages the strength of 2D GANs to produce high-quality images; ii) it generates consistent images under varying view angles and poses; iii) it allows the model to incorporate a 3D human prior and enables pose conditioning.
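To make this design concrete, here is a minimal PyTorch-style sketch of the two components described above. All names and details (PoseMappingNetwork, Modulated2DBackbone, the feature dimensions, and the simple affine modulation) are illustrative assumptions, not the paper's exact implementation:

import torch
import torch.nn as nn

class PoseMappingNetwork(nn.Module):
    # Hypothetical sketch of the 3D pose mapping network: an MLP that maps
    # 3D query points sampled from the posed human mesh, together with an
    # appearance latent code, to per-point modulation features.
    def __init__(self, style_dim=512, feat_dim=64, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + style_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, feat_dim),
        )

    def forward(self, points, style):
        # points: (B, N, 3) coordinates on/near the posed 3D human mesh
        # style:  (B, style_dim) latent code controlling appearance
        style = style.unsqueeze(1).expand(-1, points.shape[1], -1)
        return self.mlp(torch.cat([points, style], dim=-1))  # (B, N, feat_dim)

class Modulated2DBackbone(nn.Module):
    # 2D convolutional backbone whose activations are modulated by the
    # pose-mapping features (assumed already rasterized to the pixel grid),
    # using a simple feature-wise affine modulation for illustration.
    def __init__(self, feat_dim=64, channels=64):
        super().__init__()
        self.conv_in = nn.Conv2d(feat_dim, channels, 3, padding=1)
        self.to_scale = nn.Conv2d(feat_dim, channels, 1)
        self.to_shift = nn.Conv2d(feat_dim, channels, 1)
        self.conv_out = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, pose_feats):
        # pose_feats: (B, feat_dim, H, W) pose-mapping features per pixel
        h = torch.relu(self.conv_in(pose_feats))
        h = h * (1 + self.to_scale(pose_feats)) + self.to_shift(pose_feats)
        return self.conv_out(h)  # (B, 3, H, W) RGB image

Because the modulation features are a function of 3D mesh coordinates, the same latent code yields a consistent appearance when the mesh is re-posed or viewed from a different angle.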
Video Demo
Qualitative Results
Browse through random generations.
View Consistency
The appearance of the generated humans is consistent across different poses and view angles.
Appearance Interpolation
We can also interpolate between two appearances by linearly interpolating their latent codes. Use the slider here to blend between the left frame and the right frame.
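For reference, linear interpolation of latent codes is a one-liner; the generator call below is a hypothetical placeholder for the model's sampling interface:

import torch

def lerp_latents(z_left, z_right, t):
    # Blend two latent codes; t = 0 returns the left code, t = 1 the right.
    return (1.0 - t) * z_left + t * z_right

# Hypothetical usage: render frames along the interpolation path.
# for t in torch.linspace(0.0, 1.0, steps=8):
#     image = generator(lerp_latents(z_left, z_right, t), posed_mesh)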
Pose Interpolation
BibTeX
If you find this work useful for your research, please consider citing our paper:
@inproceedings{yang20233dhumangan,
  title={3DHumanGAN: 3D-Aware Human Image Generation with 3D Pose Mapping},
  author={Yang, Zhuoqian and Li, Shikai and Wu, Wayne and Dai, Bo},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={23008--23019},
  year={2023}
}