FiG-NeRF: Figure-Ground Neural Radiance Fields for 3D Object Category Modelling
1University of Washington, 2Google Research
International Conference on 3D Vision - 3DV, 2021
Abstract
We investigate the use of Neural Radiance Fields (NeRF) to learn high-quality 3D object category models from collections of input images. In contrast to previous work, we are able to do this whilst simultaneously separating foreground objects from their varying backgrounds. We achieve this via a 2-component NeRF model, FiG-NeRF, that prefers an explanation of the scene as a geometrically constant background plus a deformable foreground that represents the object category. We show that this method can learn accurate 3D object category models using only photometric supervision and casually captured images of the objects. Additionally, our 2-part decomposition allows the model to perform accurate and crisp amodal segmentation. We quantitatively evaluate our method with view synthesis and image fidelity metrics, using synthetic, lab-captured, and in-the-wild data. Our results demonstrate convincing 3D object category modelling that exceeds the performance of existing methods.
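The 2-component decomposition described above can be illustrated with a minimal volume-rendering sketch. This is not the paper's implementation: it assumes the standard NeRF compositing equations with combined foreground and background densities and a density-weighted colour mix, and the function name and soft-mask readout are illustrative.

```python
import numpy as np

def composite_two_component(sigma_fg, c_fg, sigma_bg, c_bg, deltas):
    """Render one ray from separate foreground/background fields (illustrative).

    sigma_fg, sigma_bg: per-sample densities, shape (N,)
    c_fg, c_bg:         per-sample RGB colours, shape (N, 3)
    deltas:             distances between adjacent samples, shape (N,)
    """
    # Combined density; per-sample colour is a density-weighted mixture of the
    # two components (a common two-component formulation, assumed here).
    sigma = sigma_fg + sigma_bg
    w_fg = sigma_fg / np.maximum(sigma, 1e-10)
    c = w_fg[:, None] * c_fg + (1.0 - w_fg[:, None]) * c_bg

    # Standard NeRF alpha compositing along the ray.
    alpha = 1.0 - np.exp(-sigma * deltas)
    transmittance = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = transmittance * alpha

    rgb = (weights[:, None] * c).sum(axis=0)
    # Accumulating the foreground fraction yields a soft segmentation value,
    # loosely analogous to the amodal masks the decomposition enables.
    acc_fg = (weights * w_fg).sum()
    return rgb, acc_fg
```

On a ray where only the foreground field has density, this returns the foreground colour and a mask value near 1; where only the background is dense, the mask value approaches 0.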
Overview Video
More Results
We show additional results and baseline comparisons on three datasets. We demonstrate both instance interpolations (shape+color, shape, and color) and viewpoint interpolation/extrapolation. Please see the paper for more details.
Cars
Glasses
Cups
BibTeX
@inproceedings{xie2021fignerf,
author = {Xie, Christopher and Park, Keunhong and Martin-Brualla, Ricardo and Brown, Matthew},
title = {FiG-NeRF: Figure-Ground Neural Radiance Fields for 3D Object Category Modelling},
booktitle = {International Conference on 3D Vision (3DV)},
year = {2021},
}