Portrait Neural Radiance Fields from a Single Image
Abstract
We present a method for estimating Neural Radiance Fields (NeRF) from a single headshot portrait. While NeRF has demonstrated high-quality view synthesis, it requires multiple images of static scenes and is thus impractical for casual captures and moving subjects. In this work, we propose to pretrain the weights of a multilayer perceptron (MLP), which implicitly models the volumetric density and colors, with a meta-learning framework using a light stage portrait dataset. To improve generalization to unseen faces, we train the MLP in the canonical coordinate space approximated by 3D face morphable models. We quantitatively evaluate the method using controlled captures and demonstrate the generalization to real portrait images, showing favorable results against state-of-the-art methods.
Paper
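The sketch below is an illustrative (not the authors') PyTorch outline of the pretraining idea stated in the abstract: meta-learn an initialization for a NeRF-style MLP across many light-stage subjects so it adapts quickly to a single new portrait. The names NeRFMLP, inner_loss, meta_pretrain, and subject_batches are hypothetical; the dataset, ray sampling, and volume-rendering loss are placeholders, and a Reptile-style outer update stands in for the paper's meta-learning framework.

# Minimal sketch, assuming a Reptile-style meta-learning loop; not the authors' implementation.
import copy
import torch
import torch.nn as nn

class NeRFMLP(nn.Module):
    """MLP mapping a 3D point (in canonical face coordinates) and a view
    direction to volumetric density and RGB color."""
    def __init__(self, hidden=256):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.density_head = nn.Linear(hidden, 1)             # sigma
        self.color_head = nn.Sequential(                     # view-dependent RGB
            nn.Linear(hidden + 3, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid(),
        )

    def forward(self, xyz, viewdir):
        h = self.trunk(xyz)
        sigma = torch.relu(self.density_head(h))
        rgb = self.color_head(torch.cat([h, viewdir], dim=-1))
        return sigma, rgb

def inner_loss(model, batch):
    """Placeholder photometric loss; a real implementation would volume-render
    rays and compare the rendered colors against ground-truth pixels."""
    sigma, rgb = model(batch["xyz"], batch["viewdir"])
    return ((rgb - batch["target_rgb"]) ** 2).mean()

def meta_pretrain(subject_batches, inner_steps=8, inner_lr=5e-4,
                  outer_lr=1e-2, epochs=10):
    """Adapt a copy of the shared weights to each subject, then nudge the
    shared weights toward the adapted ones (Reptile-style outer update)."""
    meta_model = NeRFMLP()
    for _ in range(epochs):
        for batch in subject_batches:          # one light-stage subject per batch
            adapted = copy.deepcopy(meta_model)
            opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
            for _ in range(inner_steps):
                opt.zero_grad()
                inner_loss(adapted, batch).backward()
                opt.step()
            with torch.no_grad():
                for p_meta, p_task in zip(meta_model.parameters(),
                                          adapted.parameters()):
                    p_meta += outer_lr * (p_task - p_meta)
    return meta_model

At test time, the meta-learned weights would serve as the initialization that is fine-tuned on the single input portrait before rendering novel views; that fine-tuning and rendering step is omitted here.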
BibTex
@article{Gao-portraitnerf,
  author  = {Gao, Chen and Shih, Yichang and Lai, Wei-Sheng and Liang, Chia-Kai and Huang, Jia-Bin},
  title   = {Portrait Neural Radiance Fields from a Single Image},
  journal = {arXiv preprint arXiv:2012.05903},
  year    = {2020}
}
Copyright © Chen Gao 2020