ConsistentNeRF - Project Page
ConsistentNeRF: Enhancing Neural Radiance Fields with 3D Consistency for Sparse View Synthesis
- Shoukang Hu1
- Kaichen Zhou2
- Kaiyu Li1
- Longhui Yu3
- Lanqing Hong4
- Tianyang Hu4
- Zhenguo Li✉4
- Gim Hee Lee✉5
- Ziwei Liu✉1
1S-Lab, Nanyang Technological University; 2University of Oxford; 3Peking University; 4Huawei Noah's Ark Lab; 5National University of Singapore
- ✉Corresponding Author
TL;DR: ConsistentNeRF enhances Neural Radiance Fields with 3D consistency for sparse view synthesis.
Abstract
Neural Radiance Fields (NeRF) has demonstrated remarkable 3D reconstruction capabilities with dense view images. However, its performance significantly deteriorates under sparse view settings. We observe that learning the 3D consistency of pixels among different views is crucial for improving reconstruction quality in such cases. In this paper, we propose ConsistentNeRF, a method that leverages depth information to regularize both multi-view and single-view 3D consistency among pixels. Specifically, ConsistentNeRF employs depth-derived geometry information and a depth-invariant loss to concentrate on pixels that exhibit 3D correspondence and maintain consistent depth relationships. Extensive experiments on recent representative works reveal that our approach can considerably enhance model performance in sparse view conditions, achieving improvements of up to 94% in PSNR, 76% in SSIM, and 31% in LPIPS compared to the vanilla baselines across various benchmarks, including DTU, NeRF Synthetic, and LLFF.
Links
DTU Dataset (3 View Input)
[Video comparisons: NeRF, DSNeRF, DietNeRF, InfoNeRF, MipNeRF, RegNeRF, and ConsistentNeRF (Ours).]
NeRF Dataset (3 View Input)
[Video comparisons: NeRF, DSNeRF, DietNeRF, InfoNeRF, MipNeRF, RegNeRF, and ConsistentNeRF (Ours).]
LLFF Dataset (3 View Input)
[Video comparisons: NeRF, DSNeRF, DietNeRF, InfoNeRF, MipNeRF, RegNeRF, and ConsistentNeRF (Ours).]
Method Overview
Figure 1. ConsistentNeRF Framework.
We regularize multi-view 3D consistency by using depth-based correspondence across views to mask pixels that satisfy 3D correspondence (the red point) and those that do not (the green point), and we build the loss from this mask. We also regularize single-view 3D consistency with a depth scale-invariant loss computed against monocular depth predicted by the MiDaS model.
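The multi-view term can be read as a reprojection check: a pixel in one view is lifted to 3D with its depth, projected into a second view, and kept in the loss only if the depth observed there agrees. Below is a minimal PyTorch sketch of such a mask; the function names, the per-view depth-map inputs, and the relative-error threshold `tau` are illustrative assumptions, not the official ConsistentNeRF implementation.

```python
import torch

def reproject(depth_i, K_i, K_j, T_i2j, pixels_i):
    """Lift pixels from view i to 3D with their depth, then project into view j.

    depth_i:  (N,) per-pixel depth in view i
    K_i, K_j: (3, 3) camera intrinsics
    T_i2j:    (4, 4) rigid transform from camera i to camera j
    pixels_i: (N, 2) pixel coordinates (x, y) in view i
    """
    ones = torch.ones(pixels_i.shape[0], 1)
    # Back-project: X_i = depth * K_i^{-1} [u, v, 1]^T
    homo = torch.cat([pixels_i.float(), ones], dim=1)              # (N, 3)
    cam_i = depth_i[:, None] * (torch.linalg.inv(K_i) @ homo.T).T  # (N, 3)
    # Transform into camera j's frame.
    cam_j = (T_i2j @ torch.cat([cam_i, ones], dim=1).T).T[:, :3]
    # Perspective projection onto view j's image plane.
    proj = (K_j @ cam_j.T).T
    pixels_j = proj[:, :2] / proj[:, 2:3].clamp(min=1e-8)
    return pixels_j, cam_j[:, 2]  # projected pixels and depth seen from j

def consistency_mask(depth_i, depth_j, K_i, K_j, T_i2j, pixels_i, tau=0.05):
    """True where the reprojected depth agrees with view j's own depth map."""
    pixels_j, depth_in_j = reproject(depth_i, K_i, K_j, T_i2j, pixels_i)
    # Nearest-neighbour lookup of view j's depth at the reprojected locations.
    x = pixels_j[:, 0].round().long().clamp(0, depth_j.shape[1] - 1)
    y = pixels_j[:, 1].round().long().clamp(0, depth_j.shape[0] - 1)
    observed = depth_j[y, x]
    # tau is an assumed relative-error threshold, not a value from the paper.
    rel_err = (observed - depth_in_j).abs() / observed.clamp(min=1e-8)
    return rel_err < tau
```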

