IM360: Large-scale Indoor Mapping with 360 Cameras
Abstract
We present a novel 3D reconstruction pipeline for 360° cameras, targeting 3D mapping and rendering of indoor environments. Traditional Structure-from-Motion (SfM) methods may not work well in large-scale indoor scenes due to the prevalence of textureless and repetitive regions. To overcome these challenges, our approach (IM360) leverages the wide field of view of omnidirectional images and integrates the spherical camera model into every core component of the SfM pipeline. To provide a comprehensive 3D reconstruction solution, we integrate a neural implicit surface reconstruction technique to generate high-quality surfaces from sparse input data. Additionally, we utilize a mesh-based neural rendering approach to refine texture maps and accurately capture view-dependent properties by combining diffuse and specular components. We evaluate our pipeline on large-scale indoor scenes from the Matterport3D and Stanford2D3D datasets. In practice, IM360 demonstrates superior textured mesh reconstruction over state-of-the-art methods, with improved camera localization and registration accuracy as well as better rendering of high-frequency details.
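To make the spherical camera model concrete, the sketch below shows the standard equirectangular pixel-to-ray mapping that omnidirectional SfM builds on; the function name, image resolution, and axis convention are illustrative assumptions rather than the exact formulation used in IM360.

```python
import numpy as np

def equirect_pixel_to_ray(u, v, width, height):
    """Map an equirectangular pixel (u, v) to a unit ray direction.

    Longitude spans [-pi, pi] across the image width and latitude spans
    [-pi/2, pi/2] across the height, so every pixel of a 360-degree image
    corresponds to a direction on the unit sphere (assumed axis convention:
    y up, z forward).
    """
    lon = (u / width - 0.5) * 2.0 * np.pi   # azimuth
    lat = (0.5 - v / height) * np.pi        # elevation
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    return np.array([x, y, z])

# Example: the image center looks straight along the +z axis.
print(equirect_pixel_to_ray(1024, 512, 2048, 1024))  # -> approximately [0, 0, 1]
```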
Results: Neural Surface Reconstruction
Using accurately estimated camera poses from a 360-degree vision sensor, we can construct a geometric mesh from sparsely scanned, large-scale indoor datasets.
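As background on how a triangle mesh can be obtained from a neural implicit surface, the sketch below samples a hypothetical signed-distance function on a regular grid and extracts its zero level set with marching cubes; IM360's reconstruction module is not necessarily implemented this way.

```python
import numpy as np
from skimage import measure

def extract_mesh_from_sdf(sdf_fn, bbox_min, bbox_max, resolution=256):
    """Sample a signed-distance function on a dense grid and mesh its zero level set.

    sdf_fn maps an (N, 3) array of points to signed distances; in a real
    pipeline this would be the trained neural implicit surface network.
    """
    xs = np.linspace(bbox_min[0], bbox_max[0], resolution)
    ys = np.linspace(bbox_min[1], bbox_max[1], resolution)
    zs = np.linspace(bbox_min[2], bbox_max[2], resolution)
    grid = np.stack(np.meshgrid(xs, ys, zs, indexing="ij"), axis=-1)
    sdf = sdf_fn(grid.reshape(-1, 3)).reshape(resolution, resolution, resolution)

    # Marching cubes turns the signed-distance grid into a triangle mesh.
    spacing = (np.array(bbox_max) - np.array(bbox_min)) / (resolution - 1)
    verts, faces, normals, _ = measure.marching_cubes(sdf, level=0.0, spacing=tuple(spacing))
    return verts + np.array(bbox_min), faces
```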
🚀 Try the Online Interactive Viewer Demo 🔍
Thanks to its use of meshes and neural textures, our method is readily compatible with popular graphics frameworks such as WebGL. This demonstrates its potential to be integrated into widely available graphics pipelines.
Please note that the reconstructed mesh has been compressed to enable a smooth in-browser experience: we decimated 50% of the mesh triangles, so the in-browser visual quality is slightly lower than that of the results shown above. ⚠️ If the renderer does not work properly, please press Ctrl + Shift + R to force-refresh the page. You may also open the Developer Console (F12) and check the logs; sometimes the viewer loads after a short delay.
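For reference, a decimation of this kind can be reproduced with off-the-shelf tools; the sketch below uses Open3D's quadric decimation and a placeholder file name, which are our assumptions rather than the exact tooling behind the demo.

```python
import open3d as o3d

# Minimal sketch: halve the triangle count of a reconstructed mesh so it
# streams well in a browser viewer. File names are placeholders.
mesh = o3d.io.read_triangle_mesh("reconstructed_scene.ply")
target = len(mesh.triangles) // 2  # keep roughly 50% of the triangles
simplified = mesh.simplify_quadric_decimation(target_number_of_triangles=target)
simplified.compute_vertex_normals()
o3d.io.write_triangle_mesh("reconstructed_scene_web.ply", simplified)
```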
Results: Texture Optimization