Download the data with semantic annotations from Google Drive and save it into the ./data/replica folder. We provide only a subset of the Replica dataset; for generating the full Replica data, please refer to the data_generation directory.
Download the pretrained segmentation network from Google Drive and save it into the ./seg folder (unzip seg/facebookresearch_dinov2_main.zip).
You can then run SNI-SLAM:
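A minimal sketch of the launch command, assuming the repository's standard run.py entry script and a bundled Replica room1 config (both the script name and the config path are assumptions, not confirmed by this section):

```bash
# Hypothetical invocation: run.py and the config path are assumptions.
python run.py configs/Replica/room1.yaml
```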
The visualization result will be saved to output/Replica/room1/vis.mp4. The green trajectory is the ground truth, and the red one is the trajectory estimated by SNI-SLAM.
Visualizer command-line arguments
--output $OUTPUT_FOLDER output folder (overrides the output folder specified in the config file)
--top_view set the camera to a top-down view; otherwise, the camera is set to the first frame of the sequence
--save_rendering save the rendered video to vis.mp4 in the output folder
--no_gt_traj do not show the ground-truth trajectory
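For example, to render and save a top-down visualization video, an invocation might look like the following. This is a sketch: the visualizer.py script name and the config path are assumptions, while the flags are the ones documented above.

```bash
# Hypothetical invocation: visualizer.py and the config path are assumptions;
# the flags are the documented ones above.
python visualizer.py configs/Replica/room1.yaml \
    --output output/Replica/room1 \
    --top_view --save_rendering
```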
Citing
If you find our code or paper useful, please consider citing:
@inproceedings{zhu2024sni,
  title={{SNI-SLAM}: Semantic Neural Implicit {SLAM}},
  author={Zhu, Siting and Wang, Guangming and Blum, Hermann and Liu, Jiuming and Song, Liang and Pollefeys, Marc and Wang, Hesheng},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={21167--21177},
  year={2024}
}
About
[CVPR 2024 & TPAMI 2025] SNI-SLAM: Semantic Neural Implicit SLAM