LOLNeRF: Learn from One Look
CVPR 2022
Daniel Rebain1,3  Mark Matthews3  Kwang Moo Yi1  Dmitry Lagun3  Andrea Tagliasacchi2,3
1University of British Columbia  2University of Toronto  3Google Research
Novel Views - CelebA-HQ
Novel Views - FFHQ
Novel Views - AFHQ
Novel Views - SRN Cars
Latent Interpolation - FFHQ
Abstract
We present a method for learning a generative 3D model based on neural radiance fields, trained solely from data with only single views of each object. While generating realistic images is no longer a difficult task, producing the corresponding 3D structure such that images can be rendered from different views is non-trivial. We show that, unlike existing methods, one does not need multi-view data to achieve this goal. Specifically, we show that by reconstructing many images aligned to an approximate canonical pose with a single network conditioned on a shared latent space, one can learn a space of radiance fields that models shape and appearance for a class of objects. We demonstrate this by training models to reconstruct object categories using datasets that contain only one view of each subject, without depth or geometry information. Our experiments show that we achieve state-of-the-art results in novel view synthesis and competitive results for monocular depth prediction.

Citation
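The core idea in the abstract, a single network conditioned on a shared latent space, with one latent code per training image, can be sketched as a tiny latent-conditioned radiance field. This is an illustrative auto-decoder-style sketch, not the paper's implementation: all sizes, layer counts, and names (`LATENT_DIM`, `field`, the two-layer MLP) are assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, not taken from the paper.
LATENT_DIM, HIDDEN, N_IMAGES = 8, 32, 4

# One shared network (here a tiny 2-layer MLP) maps a 3D point plus a
# per-image latent code to RGB + density. Each training image i owns its
# own code z[i]; during training the codes and the shared weights would
# be optimized jointly so the codes span a space of objects.
W1 = rng.normal(0.0, 0.1, (3 + LATENT_DIM, HIDDEN))
W2 = rng.normal(0.0, 0.1, (HIDDEN, 4))            # 3 color channels + 1 density
z = rng.normal(0.0, 0.01, (N_IMAGES, LATENT_DIM))  # shared latent space

def field(points, zi):
    """Evaluate the radiance field for one object.

    points: (N, 3) sample positions along camera rays.
    zi:     (LATENT_DIM,) latent code selecting which object to render.
    """
    x = np.concatenate([points, np.tile(zi, (len(points), 1))], axis=1)
    h = np.maximum(x @ W1, 0.0)                # ReLU hidden layer
    out = h @ W2
    rgb = 1.0 / (1.0 + np.exp(-out[:, :3]))    # sigmoid keeps colors in [0, 1]
    sigma = np.maximum(out[:, 3], 0.0)         # density must be non-negative
    return rgb, sigma

# Same points, different codes -> different objects from one network.
pts = rng.normal(0.0, 1.0, (5, 3))
rgb0, sigma0 = field(pts, z[0])
rgb1, sigma1 = field(pts, z[1])
```

Because every image supplies supervision for the same shared weights, multi-view consistency emerges from the conditioning rather than from multi-view data.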
@inproceedings{rebain2022lolnerf,
  title={LOLNeRF: Learn from One Look},
  author={Daniel Rebain and Mark Matthews and Kwang Moo Yi and Dmitry Lagun and Andrea Tagliasacchi},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={1558--1567},
  year={2022}
}