Dynamic View Synthesis from Dynamic Monocular Video
Abstract
We present an algorithm for generating novel views at arbitrary viewpoints and any input time step given a monocular video of a dynamic scene. Our work builds upon recent advances in neural implicit representation and uses continuous and differentiable functions for modeling the time-varying structure and the appearance of the scene. We jointly train a time-invariant static NeRF and a time-varying dynamic NeRF, and learn how to blend the results in an unsupervised manner. However, learning this implicit function from a single video is highly ill-posed (with infinitely many solutions that match the input video). To resolve the ambiguity, we introduce regularization losses to encourage a more physically plausible solution. We show extensive quantitative and qualitative results of dynamic view synthesis from casually captured videos.
Paper
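To make the joint static/dynamic modeling concrete, below is a minimal PyTorch sketch of how a time-invariant field and a time-varying field could be blended along a ray with a learned, unsupervised blending weight. The module names (StaticNeRF, DynamicNeRF, render_blended), the tiny MLP architectures, and the exact blending formula are illustrative assumptions, not the paper's implementation; the paper may combine the two fields differently.

# Hypothetical sketch of the static/dynamic blending described in the
# abstract. All names and the blending formula are illustrative assumptions.
import torch
import torch.nn as nn

class StaticNeRF(nn.Module):
    """Time-invariant field: (x, y, z) -> (density, RGB)."""
    def __init__(self, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),
        )

    def forward(self, x):
        out = self.mlp(x)
        sigma = torch.relu(out[..., :1])       # non-negative density
        rgb = torch.sigmoid(out[..., 1:])      # colors in [0, 1]
        return sigma, rgb

class DynamicNeRF(nn.Module):
    """Time-varying field: (x, y, z, t) -> (density, RGB, blend weight)."""
    def __init__(self, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 5),
        )

    def forward(self, x, t):
        out = self.mlp(torch.cat([x, t], dim=-1))
        sigma = torch.relu(out[..., :1])
        rgb = torch.sigmoid(out[..., 1:4])
        blend = torch.sigmoid(out[..., 4:])    # b in (0, 1), learned without supervision
        return sigma, rgb, blend

def render_blended(static_nerf, dyn_nerf, pts, t, deltas):
    """Volume-render one ray, blending both fields at each sample.

    pts:    (N, 3) sample positions along the ray
    t:      (N, 1) time step, one value broadcast per sample
    deltas: (N, 1) distances between adjacent samples
    """
    sigma_s, rgb_s = static_nerf(pts)
    sigma_d, rgb_d, b = dyn_nerf(pts, t)
    # One plausible blending scheme: weight densities and colors by b.
    sigma = b * sigma_s + (1.0 - b) * sigma_d
    rgb = (b * sigma_s * rgb_s + (1.0 - b) * sigma_d * rgb_d) / (sigma + 1e-8)
    alpha = 1.0 - torch.exp(-sigma * deltas)   # per-sample opacity
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:1]), 1.0 - alpha + 1e-10], dim=0),
        dim=0)[:-1]                            # accumulated transmittance
    weights = alpha * trans
    return (weights * rgb).sum(dim=0)          # composited ray color

# Usage: 64 samples along one ray at time t = 0.5.
pts = torch.rand(64, 3)
t = torch.full((64, 1), 0.5)
deltas = torch.full((64, 1), 1.0 / 64)
color = render_blended(StaticNeRF(), DynamicNeRF(), pts, t, deltas)
print(color.shape)  # torch.Size([3])

Because the blending weight multiplies both the densities and the accumulated transmittance, gradients from the photometric loss alone can push b toward the static branch in static regions and toward the dynamic branch on moving content, which is what allows it to be learned unsupervised.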
BibTeX
@inproceedings{Gao-ICCV-DynNeRF,
  author    = {Gao, Chen and Saraf, Ayush and Kopf, Johannes and Huang, Jia-Bin},
  title     = {Dynamic View Synthesis from Dynamic Monocular Video},
  booktitle = {Proceedings of the IEEE International Conference on Computer Vision},
  year      = {2021}
}
Copyright © Chen Gao 2021

