2020 TPAMI. SurfaceNet+ is a volumetric learning framework for very sparse MVS. The Sparse-MVS benchmark is maintained here. Authors: Mengqi Ji#, Jinzhi Zhang#, Qionghai Dai, Lu Fang.
Proposed a trainable occlusion-aware view selection scheme for volumetric MVS methods, e.g., SurfaceNet [5].
Analysed the advantages of volumetric methods, e.g., SurfaceNet [5] and SurfaceNet+, over depth-fusion methods, e.g., Gipuma [6], R-MVSNet [7], Point-MVSNet [8], and COLMAP [9], on the sparse-MVS problem.
Fig.1: Illustration of a very sparse MVS setting using only $1/7$ of the camera views, i.e., $\{v_i\}_{i=1,8,15,22,\dots}$, to recover model 23 in the DTU dataset [10]. Compared with the state-of-the-art methods, the proposed SurfaceNet+ provides a much more complete reconstruction, especially around the border regions captured by very sparse views.
Fig.2: Comparison with existing methods on the DTU dataset [10] under different sparse sampling strategies. When Sparsity = 3 and Batchsize = 2, the chosen camera indices are 1,2 / 4,5 / 7,8 / 10,11 / .... SurfaceNet+ consistently outperforms the state-of-the-art methods in all settings, especially in the very sparse scenarios.
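The Sparsity/Batchsize sampling pattern in the caption above can be made concrete with a short sketch. This is not the authors' code; the function name `sample_sparse_views` and its parameters are illustrative, assuming that every group of `batchsize` consecutive cameras is kept and the groups start every `sparsity` indices.

```python
def sample_sparse_views(num_views, sparsity, batchsize):
    """Return the 1-based camera indices kept under the Sparsity/Batchsize scheme.

    Illustrative sketch only: groups of `batchsize` consecutive views are kept,
    with group starts spaced `sparsity` indices apart.
    """
    chosen = []
    for start in range(1, num_views + 1, sparsity):
        chosen.extend(i for i in range(start, start + batchsize) if i <= num_views)
    return chosen

# Sparsity = 3, Batchsize = 2 over 12 views -> [1, 2, 4, 5, 7, 8, 10, 11],
# matching the pattern 1,2 / 4,5 / 7,8 / 10,11 / ... in the Fig.2 caption.
print(sample_sparse_views(12, sparsity=3, batchsize=2))

# Sparsity = 7, Batchsize = 1 -> views 1, 8, 15, 22, ..., i.e., the 1/7 setting of Fig.1.
print(sample_sparse_views(49, sparsity=7, batchsize=1))
```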
Fig.3: Results on a tank model from the Tanks and Temples 'intermediate' set [23] compared with R-MVSNet [7] and COLMAP [9], demonstrating the high-recall predictions of SurfaceNet+ in the sparse-MVS setting.
Citing
If you find SurfaceNet+, the Sparse-MVS benchmark, or SurfaceNet useful in your research, please consider citing:
@article{ji2020surfacenet_plus,
  title={SurfaceNet+: An End-to-end 3D Neural Network for Very Sparse Multi-view Stereopsis},
  author={Ji, Mengqi and Zhang, Jinzhi and Dai, Qionghai and Fang, Lu},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2020},
  publisher={IEEE}
}

@inproceedings{ji2017surfacenet,
  title={SurfaceNet: An End-To-End 3D Neural Network for Multiview Stereopsis},
  author={Ji, Mengqi and Gall, Juergen and Zheng, Haitian and Liu, Yebin and Fang, Lu},
  booktitle={Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
  pages={2307--2315},
  year={2017}
}