I am currently a Ph.D. candidate at PSL Research University, Mines Paris, in the “Perception and Machine Learning” team of the Center for Robotics (CAOR), under the supervision of Prof. Fabien Moutarde. My Ph.D. is conducted in collaboration with Valeo as part of an industrial partnership.
My research focuses on improving the robustness of computer vision at night using HD headlights. The objective is to leverage the vehicle's ability to project high-definition light patterns with its headlights, and to identify which patterns provide valuable cues to computer vision models, enhancing their understanding of the scene in challenging conditions. Improving the robustness of computer vision at night is crucial to ensuring the safety of Autonomous Driving and Advanced Driver Assistance Systems.
LiDAS: Lighting-driven Dynamic Active Sensing for Nighttime Perception
Simon De Moreau, Andrei Bursuc, Hafid El-Idrissi, and Fabien Moutarde
arXiv preprint, 2025
Nighttime environments pose significant challenges for camera-based perception, as existing methods passively rely on the scene lighting. We introduce Lighting-driven Dynamic Active Sensing (LiDAS), a closed-loop active illumination system that combines off-the-shelf visual perception models with high-definition headlights. Rather than uniformly brightening the scene, LiDAS dynamically predicts an illumination field that maximizes downstream perception performance, dimming light on empty areas and reallocating it to object regions. LiDAS enables zero-shot nighttime generalization of daytime-trained models through adaptive illumination control. Trained on synthetic data and deployed zero-shot in real-world closed-loop driving scenarios, LiDAS yields +18.7% mAP50 and +5.0% mIoU over standard low beam at equal power, and maintains performance while reducing energy use by 40%. LiDAS complements domain-generalization methods, further strengthening robustness without retraining. By turning readily available headlights into active vision actuators, LiDAS offers a cost-effective solution for robust nighttime perception.
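To make the closed loop concrete, here is a minimal, hypothetical sketch of the perceive / re-light / re-perceive cycle. The camera, headlight, and illumination policy below are toy stand-ins for illustration, not our actual trained system:

```python
import numpy as np

rng = np.random.default_rng(0)

class MockCamera:
    """Stand-in for the vehicle camera: renders a dim ambient scene
    modulated by the current per-pixel illumination field."""
    def __init__(self, shape=(120, 160)):
        self.illum = np.ones(shape)                        # mean emitted power = 1
        self.scene = rng.uniform(0.0, 60.0, shape + (3,))  # latent nighttime scene

    def capture(self):
        return np.clip(self.scene * self.illum[..., None], 0.0, 255.0)

class MockHeadlight:
    """Stand-in for an HD headlight with per-pixel intensity control."""
    def __init__(self, camera):
        self.camera = camera

    def project(self, field):
        self.camera.illum = field                          # re-light the scene

def predict_illumination(frame):
    """Toy illumination policy: brighten dark regions, dim bright ones,
    renormalized to a constant power budget (mean intensity = 1). A real
    LiDAS policy would be a trained network maximizing detector performance."""
    lum = frame.mean(axis=-1) / 255.0
    field = 1.0 - lum
    return field * (field.size / max(field.sum(), 1e-6))

cam = MockCamera()
light = MockHeadlight(cam)
frame = cam.capture()                       # 1. perceive under default lighting
light.project(predict_illumination(frame))  # 2. act: reallocate light at equal power
relit = cam.capture()                       # 3. re-perceive; feed to the detector
print(f"mean brightness before/after: {frame.mean():.1f} / {relit.mean():.1f}")
```

The constant-power renormalization in the toy policy mirrors the equal-power comparison against standard low beam reported above.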
@article{deMoreau2025lidas,
  title={LiDAS: Lighting-driven Dynamic Active Sensing for Nighttime Perception},
  author={De Moreau, Simon and Bursuc, Andrei and El-Idrissi, Hafid and Moutarde, Fabien},
  journal={arXiv preprint arXiv:2512.08912},
  year={2025},
}
DOC-Depth: A novel approach for dense depth ground truth generation
Simon De Moreau, Mathias Corsia, Hassan Bouchiba, and 4 more authors
Oral Presentation at IEEE Intelligent Vehicles Symposium 2025
Accurate depth information is essential for many computer vision applications. Yet, no available dataset recording method allows for fully dense, accurate depth estimation in large-scale dynamic environments. In this paper, we introduce DOC-Depth, a novel, efficient and easy-to-deploy approach for dense depth generation from any LiDAR sensor. After reconstructing a consistent, dense 3D environment using LiDAR odometry, we automatically address occlusions from dynamic objects thanks to DOC, our state-of-the-art dynamic object classification method. Additionally, DOC-Depth is fast and scalable, allowing for the creation of datasets unbounded in size and duration. We demonstrate the effectiveness of our approach on the KITTI dataset, improving its depth-annotation density from 16.1% to 71.2%, and release this new fully dense depth annotation to facilitate future research in the domain. We also showcase results with various LiDAR sensors and in multiple environments.
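The pipeline boils down to two steps: aggregate odometry-registered LiDAR scans into a dense static cloud, then z-buffer that cloud into the camera to obtain a dense depth map. The sketch below illustrates this under simplifying assumptions (dynamic points already removed by a DOC-like classifier, pinhole intrinsics K); the helper names are hypothetical:

```python
import numpy as np

def aggregate_scans(scans, poses):
    """Fuse per-frame LiDAR clouds (each Nx3, sensor frame) into one world-frame
    cloud using 4x4 odometry poses. Stands in for the LiDAR-odometry step."""
    clouds = []
    for pts, pose in zip(scans, poses):
        homo = np.hstack([pts, np.ones((len(pts), 1))])
        clouds.append((homo @ pose.T)[:, :3])
    return np.vstack(clouds)

def render_dense_depth(points, K, world_to_cam, h, w):
    """Z-buffer the aggregated static cloud into one camera view:
    the nearest point wins at every pixel, yielding a (near-)dense depth map."""
    homo = np.hstack([points, np.ones((len(points), 1))])
    cam = (homo @ world_to_cam.T)[:, :3]
    cam = cam[cam[:, 2] > 0.1]                        # keep points in front of the camera
    uv = cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    depth = np.full((h, w), np.inf)                   # inf marks pixels with no return
    np.minimum.at(depth, (v[ok], u[ok]), cam[ok, 2])  # z-buffer: keep the nearest depth
    return depth

# Toy usage with one identity-pose scan and pinhole intrinsics.
K = np.array([[500.0, 0.0, 80.0], [0.0, 500.0, 60.0], [0.0, 0.0, 1.0]])
cloud = aggregate_scans([np.random.rand(5000, 3) * 20.0], [np.eye(4)])
print(render_dense_depth(cloud, K, np.eye(4), 120, 160).shape)  # (120, 160)
```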
@inproceedings{deMoreau2025doc,
  title={DOC-Depth: A novel approach for dense depth ground truth generation},
  author={De Moreau, Simon and Corsia, Mathias and Bouchiba, Hassan and Almehio, Yasser and Bursuc, Andrei and El-Idrissi, Hafid and Moutarde, Fabien},
  booktitle={IEEE Intelligent Vehicles Symposium},
  year={2025},
}
LED: Light Enhanced Depth Estimation at Night
Simon De Moreau, Yasser Almehio, Andrei Bursuc, and 3 more authors
BMVC 2025
Nighttime camera-based depth estimation is a highly challenging task, especially for autonomous driving applications, where accurate depth perception is essential for safe navigation. Models trained on daytime data often fail in the absence of precise but costly LiDAR, and even vision foundation models trained on large amounts of data are unreliable in low-light conditions. In this work, we aim to improve the reliability of perception systems at night. To this end, we introduce Light Enhanced Depth (LED), a novel, cost-effective approach that significantly improves depth estimation in low-light environments by harnessing a pattern projected by the high-definition headlights available in modern vehicles. LED yields significant performance boosts across multiple depth-estimation architectures (encoder-decoder, AdaBins, DepthFormer, Depth Anything V2) on both synthetic and real datasets. Furthermore, performance gains beyond the illuminated areas reveal a holistic enhancement in scene understanding. Finally, we release the Nighttime Synthetic Drive Dataset, a synthetic and photo-realistic nighttime dataset comprising 49,990 comprehensively annotated images.
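One plausible way to exploit the projected pattern, sketched below, is to feed the known headlight pattern to the depth network as an extra input channel. This toy PyTorch model is illustrative only; it does not reproduce the architectures evaluated in the paper:

```python
import torch
import torch.nn as nn

class PatternConditionedDepth(nn.Module):
    """Toy encoder-decoder that takes the nighttime RGB frame concatenated with
    the known HD-headlight pattern (one channel) and regresses per-pixel depth.
    A hypothetical stand-in, not the LED-augmented models from the paper."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3 + 1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Softplus(),
        )

    def forward(self, rgb, pattern):
        x = torch.cat([rgb, pattern], dim=1)  # inject the projected-pattern prior
        return self.decoder(self.encoder(x))  # positive per-pixel depth

model = PatternConditionedDepth()
rgb = torch.rand(1, 3, 64, 64)      # nighttime frame lit by the HD pattern
pattern = torch.rand(1, 1, 64, 64)  # the pattern the headlight projected
print(model(rgb, pattern).shape)    # torch.Size([1, 1, 64, 64])
```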
@inproceedings{deMoreau2025led,
  title={LED: Light Enhanced Depth Estimation at Night},
  author={De Moreau, Simon and Almehio, Yasser and Bursuc, Andrei and El-Idrissi, Hafid and Stanciulescu, Bogdan and Moutarde, Fabien},
  booktitle={British Machine Vision Conference (BMVC)},
  year={2025},
}