MAtCha Gaussians:
Atlas of Charts for High-Quality Geometry and Photorealism From Sparse Views
CVPR 2025 (Highlight)
Antoine Guédon1
Tomoki Ichikawa2
Kohei Yamashita2
Ko Nishino2
We propose MAtCha Gaussians, a novel surface representation for reconstructing
high-quality 3D meshes with photorealistic rendering from sparse-view images.
Our key idea is to model the underlying scene geometry as an Atlas of Charts which we render with 2D
Gaussian surfels.
We initialize the charts with a monocular depth estimation model and refine them using
differentiable Gaussian rendering and a lightweight neural chart deformation model.
Combined with a sparse-view SfM model like
MASt3R-SfM,
MAtCha can recover sharp and accurate surface meshes
of both foreground and background objects in unbounded scenes within minutes, only from
a few unposed RGB images.
Updates
04-2025: MAtCha has been selected for a Highlight at CVPR 2025!
04-2025: We released MAtCha's code!
03-2025: The MAtCha paper has been accepted to CVPR 2025!
12-2024: Initial release of the paper.
Overview
Garden (Mip-NeRF 360) - 10 training images
Bicycle (Mip-NeRF 360) - 10 training images
Bonsai (Mip-NeRF 360) - 5 training images
Scan 21 (DTU) - 3 training images
Gundam (Custom Scene) - 10 training images
Scan 24 (DTU) - 3 training images
We present a novel appearance model that simultaneously realizes explicit high-quality 3D surface mesh recovery
and photorealistic novel view synthesis from sparse view samples.
Our key idea is to model the underlying scene geometry as an Atlas of Charts
which we render with 2D Gaussian surfels (MAtCha Gaussians).
MAtCha distills high-frequency scene surface details from an off-the-shelf monocular depth estimator
and refines them through 2D Gaussian surfel rendering.
The Gaussian surfels are attached to the charts on the fly, achieving both the photorealism of neural volumetric rendering
and the crisp geometry of a mesh model, i.e., two seemingly contradictory goals, in a single model.
At the core of MAtCha lies a novel neural deformation model and a structure loss that preserve the fine surface details
distilled from learned monocular depths while addressing their fundamental scale ambiguities.
Extensive experimental validation demonstrates MAtCha's state-of-the-art surface reconstruction quality
and photorealism on par with top contenders, with a dramatic reduction in the number of input views and in computation time.
We believe MAtCha will serve as a foundational tool for applications in vision, graphics,
and robotics that require explicit geometry in addition to photorealism.
Optimizing Charts in a Sparse-View Scenario
Given a few RGB images and their camera poses obtained using a sparse-view SfM method such as MASt3R-SfM, we first initialize charts using a pretrained monocular depth estimation model. Each chart is represented as a mesh equipped with a UV map, mapping a 2D plane to the 3D surface.
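As a rough illustration of this chart structure (a hypothetical sketch, not the authors' code; the function name, camera model, and subsampling step are all assumptions), a chart can be built by back-projecting a subsampled depth map into a grid mesh whose normalized pixel coordinates serve as the UV map:

```python
# Hypothetical sketch: initialize a chart from a monocular depth map.
# A chart is a grid mesh whose vertices are back-projected pixels; the
# normalized pixel coordinates double as the UV map (2D plane -> 3D surface).
import numpy as np

def init_chart(depth, fx, fy, cx, cy, step=8):
    """Back-project a depth map into a grid-mesh chart.

    depth: (H, W) array from a monocular depth estimator (assumed input).
    Returns vertices (N, 3), UV coordinates (N, 2), and triangle faces.
    """
    H, W = depth.shape
    vs, us = np.meshgrid(np.arange(0, H, step), np.arange(0, W, step),
                         indexing="ij")
    z = depth[vs, us]
    x = (us - cx) / fx * z                      # pinhole back-projection
    y = (vs - cy) / fy * z
    verts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    uv = np.stack([us / (W - 1), vs / (H - 1)], axis=-1).reshape(-1, 2)

    # Connect neighboring grid points into two triangles per grid cell.
    gh, gw = vs.shape
    idx = np.arange(gh * gw).reshape(gh, gw)
    tl, tr = idx[:-1, :-1].ravel(), idx[:-1, 1:].ravel()
    bl, br = idx[1:, :-1].ravel(), idx[1:, 1:].ravel()
    faces = np.concatenate([np.stack([tl, bl, tr], -1),
                            np.stack([tr, bl, br], -1)])
    return verts, uv, faces
```

The returned UV coordinates give each 3D vertex a fixed address on the 2D plane, which is what later lets per-chart encodings be stored and looked up in UV space.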
We then optimize our charts and enforce their alignment with input SfM data using three key components:
- Depth encodings stored along a 1D axis, which encourage points with similar initial depths to be deformed together.
- Chart encodings stored in a sparse 2D grid in UV space, which efficiently deform the geometry while preserving the high-frequency surface details visible in the initial depth maps.
- Per-chart confidence maps, which automatically identify outliers in the input SfM data.
Our aligned charts provide a sharp, dense and accurate estimate of the 3D scene, which can be further refined using input images and a Gaussian Splatting-based rendering pipeline. Our representation allows for reconstructing high-quality surface meshes within minutes, even in sparse-view scenarios.
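The first two components could be sketched roughly as follows. This is a hypothetical numpy illustration of encoding lookups feeding a tiny MLP to produce per-vertex offsets, not the paper's actual architecture: all sizes, names, and weights are assumptions, and the confidence maps are omitted for brevity.

```python
# Hypothetical sketch of a lightweight chart deformation model: each vertex
# looks up a feature along a 1D depth axis and another in a 2D UV grid, and
# a tiny MLP maps the concatenated features to a 3D offset.
import numpy as np

rng = np.random.default_rng(0)

def interp1d(table, t):
    """Linear interpolation into a (K, C) encoding table, t in [0, 1]."""
    K = table.shape[0]
    x = np.clip(t, 0.0, 1.0) * (K - 1)
    i0 = np.floor(x).astype(int)
    i1 = np.minimum(i0 + 1, K - 1)
    w = (x - i0)[..., None]
    return (1 - w) * table[i0] + w * table[i1]

def interp2d(grid, uv):
    """Bilinear interpolation into a (K, K, C) UV encoding grid."""
    K = grid.shape[0]
    x = np.clip(uv, 0.0, 1.0) * (K - 1)          # (N, 2), axis 0 = u
    i0 = np.floor(x).astype(int)
    i1 = np.minimum(i0 + 1, K - 1)
    wu = (x[:, 0] - i0[:, 0])[:, None]
    wv = (x[:, 1] - i0[:, 1])[:, None]
    top = (1 - wu) * grid[i0[:, 0], i0[:, 1]] + wu * grid[i1[:, 0], i0[:, 1]]
    bot = (1 - wu) * grid[i0[:, 0], i1[:, 1]] + wu * grid[i1[:, 0], i1[:, 1]]
    return (1 - wv) * top + wv * bot

C = 8
depth_enc = rng.normal(size=(32, C))         # features along a 1D depth axis
chart_enc = rng.normal(size=(16, 16, C))     # features in a 2D UV grid
W1 = rng.normal(size=(2 * C, 16)) * 0.1      # illustrative random MLP weights
W2 = rng.normal(size=(16, 3)) * 0.1

def deform(verts, uv, depth_norm):
    """Offset each vertex using its depth and UV encodings."""
    feat = np.concatenate([interp1d(depth_enc, depth_norm),
                           interp2d(chart_enc, uv)], axis=-1)
    hidden = np.maximum(feat @ W1, 0.0)      # ReLU MLP
    return verts + hidden @ W2               # per-vertex 3D offset
```

Because vertices with similar depths index nearby entries of the 1D table, they receive correlated offsets, which is the intuition behind deforming points with similar initial depth together.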
Extracting Meshes from MAtCha Gaussians
(a) Multi-resolution TSDF fusion (10 training views)
(b) Adaptive tetrahedralization (10 training views)
Most existing methods relying on 3D Gaussians or 2D Gaussian Surfels
apply TSDF fusion on rendered depth maps to extract a mesh from the volumetric representation.
However, TSDF fusion is limited to bounded scenes and cannot extract high-quality meshes
that include both the foreground and background objects of the scene.
Moreover, applying TSDF fusion on 2D Gaussian Surfels can over-smooth the geometry, erode fine details,
and produce artifacts, such as "disk-aliasing" patterns on the surface.
To this end, our implementation includes (a) a custom multi-resolution TSDF fusion that covers both foreground and background objects,
and we also propose (b) adapting the tetrahedralization from
Gaussian Opacity Fields (GOF)
to make it compatible with any Gaussian-based method capable of rendering perspective-accurate depth maps.
- First, we change the definition of the opacity field to rely on depth maps instead of 3D Gaussians as in GOF: for any set of input depth maps, we define a binary opacity field together with an adaptive dilation operation that avoids eroding the geometry during mesh extraction.
- Second, because the tetrahedralization introduced in GOF generally produces very large meshes with more than 10M vertices, we propose a new sampling strategy for building the initial tetrahedron grid, making it easy to adjust or lower the resolution of the output mesh.
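To illustrate the idea of a binary opacity field defined from depth maps rather than Gaussians, here is a hypothetical sketch (not the released implementation; the function name, pinhole camera convention, and dilation margin are all assumptions). A point is considered occupied unless some view sees it clearly in front of the rendered surface:

```python
# Hypothetical sketch of a depth-map-based binary opacity field: a 3D point
# is "inside" unless it lies in front of the rendered depth, beyond a small
# dilation margin, in at least one view that sees it. The margin keeps points
# near the surface marked occupied, avoiding erosion of thin geometry.
import numpy as np

def binary_opacity(points, depth_maps, intrinsics, w2c, dilation=0.01):
    """points: (N, 3) world points; depth_maps: list of (H, W) arrays;
    intrinsics: list of (fx, fy, cx, cy); w2c: list of (4, 4) matrices."""
    inside = np.ones(len(points), dtype=bool)
    homog = np.concatenate([points, np.ones((len(points), 1))], axis=1)
    for depth, (fx, fy, cx, cy), M in zip(depth_maps, intrinsics, w2c):
        cam = (homog @ M.T)[:, :3]               # world -> camera frame
        z = cam[:, 2]
        zs = np.where(z > 0, z, 1.0)             # avoid divide-by-zero
        u = np.round(cam[:, 0] / zs * fx + cx).astype(int)
        v = np.round(cam[:, 1] / zs * fy + cy).astype(int)
        H, W = depth.shape
        vis = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
        d = np.where(vis, depth[np.clip(v, 0, H - 1),
                                np.clip(u, 0, W - 1)], np.inf)
        # A visible point clearly in front of the rendered surface is empty
        # space; the dilation margin dilates the occupied region outward.
        inside &= ~(vis & (z < d - dilation))
    return inside
```

Evaluating such a field at the vertices of a tetrahedron grid gives the inside/outside labels needed for isosurface extraction over the tetrahedralization.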
BibTex
If you find this work useful for your research, please cite:
@article{guedon2025matcha,
title={MAtCha Gaussians: Atlas of Charts for High-Quality Geometry and Photorealism From Sparse Views},
author={Gu{\'e}don, Antoine and Ichikawa, Tomoki and Yamashita, Kohei and Nishino, Ko},
journal={CVPR},
year={2025},
}
Further information
If you like this project, check out our previous works related to 3D reconstruction and Gaussian Splatting:
- Guédon and Lepetit. - Gaussian Frosting: Editable Complex Radiance Fields with Real-Time Rendering (ECCV 2024 - Oral)
- Guédon and Lepetit. - SuGaR: Surface-Aligned Gaussian Splatting for Efficient 3D Mesh Reconstruction and High-Quality Mesh Rendering (CVPR 2024)
Acknowledgements
This work was in part supported by
JSPS 20H05951 and 21H04893,
and JST JPMJCR20G7 and JPMJAP2305.
This work was also in part supported by the ERC grant "explorer" (No. 101097259).
This work was granted access to the HPC resources of IDRIS under the allocation
2024-AD011013387R2 made by GENCI.