This paper proposes Sparse2DGS, a Gaussian Splatting method tailored for surface reconstruction from sparse input views. Traditional methods relying on dense views and SfM points struggle under sparse-view conditions. While learning-based MVS methods can produce dense 3D points, directly combining them with Gaussian Splatting results in suboptimal performance due to the ill-posed nature of the geometry. Sparse2DGS addresses this by introducing geometry-prioritized enhancement schemes to enable robust geometric learning even in challenging sparse settings.
We use the unsupervised CLMVSNet to provide MVS priors. Before running, you need to download its pre-trained weights and set the dataset and weight paths in MVS/config.yaml.
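The exact schema of MVS/config.yaml is repository-specific; a hypothetical fragment might look like the following (all key names and paths below are illustrative placeholders, not the actual schema — check the file shipped with the repo):

```yaml
# Illustrative only: key names and paths are assumptions.
dataset_path: /path/to/dtu            # root of the DTU dataset
checkpoint_path: /path/to/clmvsnet.ckpt  # pre-trained CLMVSNet weights
```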
🚀 Training
Set the dtu_path in scripts/train_all.py, then run the script to train the model on multiple GPUs:
```shell
python ./scripts/train_all.py
```
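One common way such a launcher distributes per-scene training runs over GPUs is simple round-robin scheduling. The sketch below is illustrative only — the function name, scene names, and the commented launch command are assumptions, not the actual contents of scripts/train_all.py:

```python
def assign_scenes(scenes, num_gpus):
    """Round-robin DTU scenes over GPU ids (illustrative scheduling only)."""
    buckets = {gpu: [] for gpu in range(num_gpus)}
    for i, scene in enumerate(scenes):
        buckets[i % num_gpus].append(scene)
    return buckets

# Each per-GPU worker could then launch one training run per scene, e.g.
# via subprocess with CUDA_VISIBLE_DEVICES set to its GPU id (hypothetical):
#   subprocess.run(["python", "train.py", "-s", scene_path],
#                  env={**os.environ, "CUDA_VISIBLE_DEVICES": str(gpu)})
```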
🎨 Mesh Extraction
Use the following command to render the mesh of each scene:
```shell
python ./scripts/render_all.py
```
💻 Evaluation
For evaluation, first download the DTU ground truth, which includes the reference point clouds, and the 2DGS data, which contains scene masks and transformation matrices. Then set the corresponding paths in scripts/eval_dtu.py.
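The DTU protocol reports accuracy (mean distance from the reconstructed points to the reference point cloud) and completeness (the reverse direction). scripts/eval_dtu.py implements the full masked evaluation; the core metric can be sketched with a brute-force nearest-neighbor pass (function names here are illustrative, not the script's API, and real evaluation uses KD-trees plus the scene masks):

```python
import numpy as np

def nearest_dists(a, b):
    # For each point in a, the distance to its nearest neighbor in b.
    # O(len(a) * len(b)) brute force; fine for a sketch, not for full scans.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1)

def dtu_style_metrics(pred, gt):
    accuracy = nearest_dists(pred, gt).mean()      # pred -> reference
    completeness = nearest_dists(gt, pred).mean()  # reference -> pred
    overall = 0.5 * (accuracy + completeness)
    return accuracy, completeness, overall
```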
📖 Citation
Please cite our paper if you use the code in this repository:

```bibtex
@article{wu2025sparse2dgs,
  title={Sparse2DGS: Geometry-Prioritized Gaussian Splatting for Surface Reconstruction from Sparse Views},
  author={Wu, Jiang and Li, Rui and Zhu, Yu and Guo, Rong and Sun, Jinqiu and Zhang, Yanning},
  journal={arXiv preprint arXiv:2504.20378},
  year={2025}
}
```
📨 Acknowledgments
This code is based on 2DGS and CLMVSNet, and we sincerely thank the authors for their contributions.