DOC-Depth: A novel approach for dense depth ground truth generation
Official implementation of the DOC-Depth method.
If you use our method in your research, please cite:
@inproceedings{deMoreau2024doc,
  title     = {DOC-Depth: A novel approach for dense depth ground truth generation},
  author    = {De Moreau, Simon and Corsia, Mathias and Bouchiba, Hassan and Almehio, Yasser and Bursuc, Andrei and El-Idrissi, Hafid and Moutarde, Fabien},
  booktitle = {2025 IEEE Intelligent Vehicles Symposium (IV)},
  year      = {2025},
}
Dense Depth KITTI annotations
Please visit our project page to download the dense annotations of KITTI.
Calibration
The first step of the pipeline is to calibrate the LiDAR and camera together. See the Calibration folder to use our tool.
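For intuition, here is a minimal sketch of how such a calibration result is typically used: projecting LiDAR points into the image with an intrinsic matrix and a LiDAR-to-camera extrinsic transform. The file names and array layouts below are placeholders, not the actual output format of our calibration tool.

```python
# Minimal sketch: project LiDAR points into the camera frame once the
# extrinsics (T_cam_lidar) and intrinsics (K) are known.
# File names and array layouts are hypothetical.
import numpy as np

K = np.load("camera_intrinsics.npy")        # 3x3 intrinsic matrix (placeholder file)
T_cam_lidar = np.load("extrinsics.npy")     # 4x4 LiDAR-to-camera transform (placeholder file)
points = np.load("lidar_scan.npy")          # Nx3 LiDAR points (placeholder file)

# Transform points into the camera frame.
pts_h = np.hstack([points, np.ones((points.shape[0], 1))])  # Nx4 homogeneous coordinates
pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]

# Keep points in front of the camera and project with the pinhole model.
pts_cam = pts_cam[pts_cam[:, 2] > 0]
uv = (K @ pts_cam.T).T
uv = uv[:, :2] / uv[:, 2:3]                 # pixel coordinates (u, v)
depth = pts_cam[:, 2]                       # per-point depth in metres
```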
Recording
The easiest way to record your dataset is to use ROS and record all your sensors into .bag files.
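A quick way to sanity-check a recording is to list the topics contained in the bag with the ROS 1 rosbag Python API. The bag path below is a placeholder.

```python
# List topics, message types and message counts of a recorded bag (ROS 1).
import rosbag

with rosbag.Bag("my_record.bag") as bag:          # placeholder bag path
    info = bag.get_type_and_topic_info()
    for topic, topic_info in info.topics.items():
        print(topic, topic_info.msg_type, topic_info.message_count)
```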
Preprocessing
After recording, you must use our pre-processing pipeline with SLAM and DOC to obtain a dense and classified reconstruction of your recording. See the Preprocessing folder for more information.
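The key output of this step is a reconstruction in which each point carries a class label (e.g. static vs. dynamic). The sketch below only illustrates how such a labeled cloud can be split; the file layout and label values are assumptions for illustration, not the format produced by the pipeline.

```python
# Minimal sketch: split a classified reconstruction into static and dynamic
# points. The Nx4 (x, y, z, label) layout and label values are hypothetical.
import numpy as np

STATIC, DYNAMIC = 0, 1                               # hypothetical label convention
cloud = np.load("classified_reconstruction.npy")     # Nx4: x, y, z, label (placeholder file)

static_points = cloud[cloud[:, 3] == STATIC, :3]
dynamic_points = cloud[cloud[:, 3] == DYNAMIC, :3]
print(f"{len(static_points)} static / {len(dynamic_points)} dynamic points")
```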
Rendering
Finally, apply our composite rendering to the classified LiDAR frames to obtain your dense depth maps. See the Rendering folder to access our tool.
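To give an idea of what rendering a point cloud into a depth image involves, here is a minimal z-buffer sketch that keeps the closest point per pixel. It is only an illustration under assumed inputs (random placeholder points and a KITTI-like image size); it is not the composite rendering implemented in the Rendering tool.

```python
# Minimal z-buffer sketch: rasterise projected points into a depth image,
# keeping the closest depth per pixel. Inputs are random placeholders.
import numpy as np

H, W = 376, 1241                                    # image size (KITTI-like, for illustration)
rng = np.random.default_rng(0)
uv = rng.uniform([0, 0], [W, H], size=(100000, 2))  # placeholder projected pixel coordinates
depth = rng.uniform(1.0, 80.0, size=100000)         # placeholder per-point depths (metres)

depth_map = np.full((H, W), np.inf)
u = np.round(uv[:, 0]).astype(int)
v = np.round(uv[:, 1]).astype(int)
valid = (u >= 0) & (u < W) & (v >= 0) & (v < H)

for ui, vi, d in zip(u[valid], v[valid], depth[valid]):
    if d < depth_map[vi, ui]:                       # keep the nearest point per pixel
        depth_map[vi, ui] = d

depth_map[np.isinf(depth_map)] = 0.0                # 0 marks pixels with no depth
```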