👁️ IRIS: Inverse Rendering of Indoor Scenes from Low Dynamic Range Images
CVPR 2025
Tuotuo Li¹, Michael Zollhöfer¹, Johannes Kopf¹, Shenlong Wang², Changil Kim¹
Abstract
Inverse rendering seeks to recover 3D geometry, surface material, and lighting from captured images, enabling advanced applications such as novel-view synthesis, relighting, and virtual object insertion. However, most existing techniques rely on high dynamic range (HDR) images as input, limiting accessibility for general users. In response, we introduce IRIS, an inverse rendering framework that recovers physically based materials, spatially varying HDR lighting, and camera response functions from multi-view, low dynamic range (LDR) images. By eliminating the dependence on HDR input, we make inverse rendering technology more accessible.
We evaluate our approach on real-world and synthetic scenes and compare it with state-of-the-art methods. Our results show that IRIS effectively recovers HDR lighting, accurate materials, and plausible camera response functions, supporting photorealistic relighting and object insertion.
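For intuition about what recovering HDR from LDR involves, here is a minimal sketch of the underlying image-formation assumption, using a toy gamma-curve CRF. The functions below are illustrative stand-ins, not the paper's model: IRIS estimates a more general response function and uses multi-view observations to constrain saturated pixels.

```python
import numpy as np

def crf_gamma(radiance, gamma=2.2):
    """Toy camera response: clamp HDR radiance to [0, 1], then gamma-encode.
    Real CRFs are more complex; IRIS optimizes CRF parameters jointly."""
    return np.clip(radiance, 0.0, 1.0) ** (1.0 / gamma)

def inverse_crf_gamma(ldr, gamma=2.2):
    """Invert the toy CRF to get linear radiance. Saturated pixels (ldr ~ 1)
    are ambiguous: the true HDR value is only bounded from below."""
    return ldr ** gamma

hdr = np.array([0.05, 0.5, 3.0])    # true radiance; 3.0 is a bright emitter
ldr = crf_gamma(hdr)                # what an LDR capture would record
recovered = inverse_crf_gamma(ldr)  # exact only for unsaturated pixels
saturated = ldr >= 1.0
print(recovered, saturated)         # ~[0.05 0.5 1.0] [False False True]
```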
Results
Relighting and Object Insertion Comparisons
Our videos show that the inserted light sources are reflected by specular surfaces (e.g., the whiteboard and the mirror).
The baseline FIPT* takes LDR images and our estimated emission masks as input.
FIPT takes HDR images as input and serves as a reference; however, HDR images are not available for some of the real scenes.
[Video results on real scenes: methods with LDR input, methods with HDR input, and application demos.]
Material and Lighting Comparisons with Ground Truth
To evaluate the quality of inverse rendering, we compare IRIS with multiple baselines on synthetic scenes from FIPT, where ground truth material, geometry, and lighting are available.
[Video results on synthetic scenes: methods with LDR input and methods with HDR input.]
Material and Lighting Qualitative Comparisons
[Video results on real scenes: methods with LDR input and methods with HDR input.]
Framework Overview
Given multi-view posed LDR images, our inverse rendering pipeline is divided into two main stages. In the initialization stage, we initialize the BRDF, extract a surface light field, and estimate emitter geometry. In the optimization stage, we first recover HDR radiance from the LDR input, then bake shading maps, and jointly optimize BRDF and CRF parameters. These three steps are repeated until convergence.
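To make the alternating scheme concrete, the toy below instantiates the third step (joint BRDF and CRF optimization) in a deliberately simplified form: given a baked shading value per pixel, it fits a single diffuse albedo and a one-parameter gamma CRF to the observed LDR pixels by gradient descent. The gamma CRF, scalar albedo, and optimizer settings are assumptions for illustration only; the actual method optimizes spatially varying BRDFs and a more flexible CRF parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)
shading = rng.uniform(0.1, 1.0, size=1000)  # baked shading per pixel (step 2)
true_albedo, true_gamma = 0.6, 2.2
ldr_obs = (true_albedo * shading) ** (1.0 / true_gamma)  # observed LDR pixels

albedo, gamma = 0.3, 1.5  # initial guesses for BRDF (albedo) and CRF (gamma)
lr = 0.1
for _ in range(10000):
    linear = albedo * shading       # re-rendered linear radiance
    pred = linear ** (1.0 / gamma)  # CRF-encoded prediction
    err = pred - ldr_obs
    # Analytic gradients of 0.5 * mean(err**2) w.r.t. albedo and gamma.
    grad_albedo = np.mean(err * (1.0 / gamma) * linear ** (1.0 / gamma - 1.0) * shading)
    grad_gamma = np.mean(err * pred * (-np.log(linear) / gamma**2))
    albedo -= lr * grad_albedo
    gamma -= lr * grad_gamma

print(albedo, gamma)  # should approach the true values (0.6, 2.2)
```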
References
Wu, Liwen, et al. "Factorized inverse path tracing for efficient and accurate material-lighting estimation." ICCV, 2023.
Yao, Yao, et al. "NeILF: Neural incident light field for physically-based material estimation." ECCV, 2022.
Zhu, Jingsen, et al. "I²-SDF: Intrinsic indoor scene reconstruction and editing via raytracing in neural SDFs." CVPR, 2023.
Li, Zhengqin, et al. "Physically-based editing of indoor scene lighting from a single image." ECCV, 2022.