DeepView: View synthesis with learned gradient descent
DeepView
View Synthesis with Learned Gradient Descent
John Flynn (jflynn@google.com)
Michael Broxton (broxton@google.com)
Paul Debevec (debevec@google.com)
Matthew DuVall (matthewduvall@google.com)
Graham Fyffe (fyffe@google.com)
Ryan Overbeck (rover@google.com)
Noah Snavely (snavely@google.com)
Richard Tucker (richardt@google.com)
Google Inc.
Technical Video
Video with synthesized fly-throughs and depth visualizations of the scenes shown in the paper.
Example MPIs in our interactive viewer
We present several scenes in an interactive viewer. Note that these were made with a 16-view version of the model in the paper, with a sparsity penalty to reduce unneeded content on occluded layers.
The Chrome browser is recommended.
Here are brief instructions for using the viewer.
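As background on what an MPI viewer renders: each multiplane image is a stack of fronto-parallel RGBA layers that are alpha-composited back to front with the standard "over" operator. The sketch below illustrates that compositing step only (layer shapes and function names are our own; this is not the project's viewer code):

```python
import numpy as np

def composite_mpi(layers):
    """Composite MPI layers back to front with the "over" operator.

    layers: list of (H, W, 4) RGBA arrays, ordered back to front,
    with premultiplication applied here for clarity.
    """
    h, w, _ = layers[0].shape
    out = np.zeros((h, w, 3))
    for layer in layers:                    # back to front
        rgb, a = layer[..., :3], layer[..., 3:4]
        out = rgb * a + out * (1.0 - a)     # "over" compositing
    return out
```

For example, a fully opaque red back layer under a half-transparent green front layer composites to an even red/green mix.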
Comparison with Soft3D and Zhou et al. on Spaces dataset
View a comparison on test scenes for 4-view (large baseline) case.
Comparison with Soft3D on Kalantari et al. dataset
View a comparison on test scenes from Kalantari et al.
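As a toy illustration of the learned gradient descent (LGD) pattern named in the title: instead of a hand-tuned step rule, each iteration feeds the current estimate and a gradient signal through a learned update. The sketch below stands in for that idea with a fixed preconditioner on a least-squares problem; it is our own illustrative example, not the paper's CNN-based update:

```python
import numpy as np

def lgd_solve(A, b, P, iters=4):
    """Refine x toward the least-squares solution of Ax = b.

    Each iteration applies x <- x - P @ grad, where P stands in for
    a learned update (DeepView learns a CNN in place of P).
    """
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)   # gradient of 0.5 * ||Ax - b||^2
        x = x - P @ grad           # "learned" update step
    return x

A = np.array([[2.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([2.0, 1.0, 2.0])
P = np.linalg.inv(A.T @ A)         # ideal preconditioner for this toy
x = lgd_solve(A, b, P)
```

With this choice of P the iteration reaches the exact least-squares solution in one step; a learned update aims to make similarly large, well-aimed steps so that only a few iterations are needed.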
Other Studies
Click to see results from ablation of gradient components, or from varying the number of LGD iterations.
Extended Training Details
Click to see details of the methods used to reduce RAM during training and inference, as well as the training and loss hyperparameters.
Spaces training data
Click here to access the Spaces dataset used to train DeepView, along with a script to compute the evaluation in the paper.
VR@50 2018 Light Fields
At SIGGRAPH 2018 in Vancouver, Prof. Henry Fuchs invited us to record panoramic light field stills of the "VR@50" panels featuring Virtual Reality pioneer Ivan Sutherland and his colleagues. Here you can find the light field data files, which can be viewed in 6-degrees-of-freedom VR using the free Welcome to Light Fields app available on Steam VR. First, here are 4K panoramas rendered from the center position of the light fields of both the afternoon and the evening events:
Installing the light fields will allow you to step inside these scenes in Virtual Reality and move your viewpoint in any direction within a volume roughly 60 centimeters in diameter. Download the additional VR@50 light fields and installation instructions here. Enjoy!