In this work, we introduce LiDARCrafter, a unified framework for 4D LiDAR generation and editing. We contribute:
We present the first 4D generative world model dedicated to LiDAR data, offering superior controllability and spatiotemporal consistency.
We introduce a tri-branch, 4D-layout-conditioned pipeline that turns natural language into an editable 4D layout and uses it to guide temporally consistent LiDAR synthesis.
We propose a comprehensive evaluation suite for LiDAR sequence generation, encompassing scene-level, object-level, and sequence-level metrics.
We demonstrate state-of-the-art single-frame and sequence-level LiDAR point cloud generation on nuScenes, with improved foreground quality over existing methods.
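For a sense of what a scene-level comparison metric looks like, here is a minimal sketch of the symmetric Chamfer distance, a standard point-cloud similarity measure. This is a generic illustration, not the project's actual evaluation code; the metrics in the released suite are defined in the repository itself.

```python
import numpy as np

def chamfer_distance(p: np.ndarray, q: np.ndarray) -> float:
    """Symmetric Chamfer distance between point clouds p (N, 3) and q (M, 3)."""
    # Pairwise Euclidean distances between every point in p and every point in q.
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)
    # Average nearest-neighbor distance in both directions.
    return float(d.min(axis=1).mean() + d.min(axis=0).mean())

rng = np.random.default_rng(0)
cloud = rng.normal(size=(128, 3))
print(chamfer_distance(cloud, cloud))  # identical clouds -> 0.0
```

Real benchmarks typically accelerate the nearest-neighbor search with a KD-tree rather than the dense pairwise matrix shown here, which is quadratic in the number of points.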
📚 Citation
If you find this work helpful for your research, please kindly consider citing our paper:
@inproceedings{liang2026lidarcrafter,
title = {{LiDARCrafter}: Dynamic {4D} World Modeling from {LiDAR} Sequences},
author = {Ao Liang and Youquan Liu and Yu Yang and Dongyue Lu and Linfeng Li and Lingdong Kong and Huaici Zhao and Wei Tsang Ooi},
booktitle = {Proceedings of the AAAI Conference on Artificial Intelligence (AAAI)},
volume = {40},
year = {2026},
}
Updates
[11.2025] - LiDARCrafter has been accepted to AAAI 2026 for Oral Presentation. 🎉
[10.2025] - The code is being organized for release. All pretrained weights for evaluation are available on Hugging Face.
[08.2025] - The technical report of LiDARCrafter is available on arXiv.
```shell
python evaluation/extract_foreground_samples.py --model ori
```
🔧 Generation Framework
Overall Framework
4D Layout Generation
Single-Frame Generation
🐍 Model Zoo
To be updated.
📝 TODO List
Initial release. 🚀
Release the training code.
Release the inference code.
Release the evaluation code.
Refine the README.
License
This work is released under the Apache License, Version 2.0, while some specific implementations in this codebase may be under other licenses. Please refer to LICENSE.md for details, especially if you intend to use our code for commercial purposes.
Acknowledgements
This work is developed based on the MMDetection3D codebase.
MMDetection3D is an open-source, PyTorch-based toolbox for general 3D perception, developed as part of the OpenMMLab project by MMLab.
Part of the benchmarked models are from the OpenPCDet and 3DTrans projects.