Modeling Ambient Scene Dynamics for Free-view Synthesis
Meng-Li Shih1
Jia-Bin Huang2,3
Changil Kim3
Rajvi Shah3
Johannes Kopf3
Chen Gao3
1University of Washington 2University of Maryland, College Park 3Meta
SIGGRAPH 2024
Abstract
We introduce a novel method for dynamic free-view synthesis of ambient scenes from a monocular capture, bringing an immersive quality to the viewing experience. Our method builds upon recent advancements in 3D Gaussian Splatting (3DGS), which can faithfully reconstruct complex static scenes. Previous attempts to extend 3DGS to represent dynamics have been confined to bounded scenes or require multi-camera captures, and they often fail to generalize to unseen motions, limiting their practical application. Our approach overcomes these constraints by leveraging the periodicity of ambient motions to learn a motion trajectory model, coupled with careful regularization. We also propose important practical strategies to improve the visual quality of the baseline 3DGS static reconstructions and to improve memory efficiency, which is critical for GPU-memory-intensive learning. We demonstrate high-quality photorealistic novel view synthesis of several ambient natural scenes with intricate textures and fine structural elements.
Method Overview
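The page does not include code, but the abstract's core idea of exploiting the periodicity of ambient motion can be illustrated with a minimal sketch. The snippet below assumes a truncated Fourier basis that maps time to a per-Gaussian 3D displacement; the function name, parameterization, and harmonic count are hypothetical, not the paper's actual model.

```python
import numpy as np

def periodic_displacement(t, coeffs_sin, coeffs_cos, base_freq=1.0):
    """Periodic 3D displacement for one Gaussian from a truncated Fourier basis.

    t          : scalar time (seconds).
    coeffs_sin : (K, 3) sine coefficients for K harmonics (learned per Gaussian
                 in a real system; random placeholders suffice here).
    coeffs_cos : (K, 3) cosine coefficients for K harmonics.
    base_freq  : fundamental frequency of the ambient motion in Hz.
    """
    K = coeffs_sin.shape[0]
    k = np.arange(1, K + 1)                   # harmonic indices 1..K
    phases = 2.0 * np.pi * base_freq * k * t  # (K,) phase of each harmonic
    # Summing sin/cos harmonics yields a trajectory that repeats with period
    # 1 / base_freq, encoding the periodicity assumption on ambient motion.
    return (np.sin(phases)[:, None] * coeffs_sin
            + np.cos(phases)[:, None] * coeffs_cos).sum(axis=0)  # (3,)
```

Because every harmonic is an integer multiple of `base_freq`, the displacement at time `t` and `t + 1/base_freq` is identical by construction, which is the property that lets such a model extrapolate to unseen times.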
Comparisons
Novel View-Time Synthesis (Freeze Time & Change View)
RoDynRF [Liu et al. 2023]
4D-GS [Wu et al. 2023]
Ours
Novel View-Time Synthesis (Freeze View & Change Time)
RoDynRF [Liu et al. 2023]
4D-GS [Wu et al. 2023]
Ours
BibTex
@inproceedings{ShihAmbGaus24,
author = {Meng-Li Shih and Jia-Bin Huang and Changil Kim and Rajvi Shah and Johannes Kopf and Chen Gao},
title = {Modeling Ambient Scene Dynamics for Free-view Synthesis},
booktitle = {ACM SIGGRAPH},
year = {2024}
}
Copyright © Meng-Li Shih 2024