Enhancing End-to-End Autonomous Driving with Latent World Model (ICLR 2025)
Yingyan Li, Lue Fan, Jiawei He, Yuqi Wang, Yuntao Chen, Zhaoxiang Zhang and Tieniu Tan
This paper presents the LAtent World model (LAW), a self-supervised framework that predicts future scene features from current scene features and ego trajectories.
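For orientation, here is a minimal PyTorch sketch of that idea: future latent features are predicted from current features conditioned on the ego trajectory, and supervised by the features actually observed at the next timestep. All names, shapes, and layer choices below are illustrative assumptions, not the repository's actual API; see the code and paper for the real architecture.

```python
import torch
import torch.nn as nn

class LatentWorldModel(nn.Module):
    """Conceptual sketch: predict future latent scene features from
    current features and a planned ego trajectory (names are hypothetical)."""

    def __init__(self, feat_dim: int = 256, traj_dim: int = 2, horizon: int = 6):
        super().__init__()
        # Encode the planned ego trajectory (horizon x (x, y) waypoints) into one embedding.
        self.traj_encoder = nn.Sequential(
            nn.Linear(horizon * traj_dim, feat_dim), nn.ReLU(), nn.Linear(feat_dim, feat_dim)
        )
        # Fuse current scene tokens with the trajectory embedding to predict future tokens.
        self.predictor = nn.Sequential(
            nn.Linear(feat_dim * 2, feat_dim), nn.ReLU(), nn.Linear(feat_dim, feat_dim)
        )

    def forward(self, curr_feat: torch.Tensor, ego_traj: torch.Tensor) -> torch.Tensor:
        # curr_feat: (B, N, feat_dim) latent scene tokens; ego_traj: (B, horizon, traj_dim)
        traj_emb = self.traj_encoder(ego_traj.flatten(1))      # (B, feat_dim)
        traj_emb = traj_emb.unsqueeze(1).expand_as(curr_feat)  # broadcast over tokens
        return self.predictor(torch.cat([curr_feat, traj_emb], dim=-1))

def latent_world_model_loss(pred_future_feat: torch.Tensor, next_frame_feat: torch.Tensor) -> torch.Tensor:
    # Self-supervised target: features extracted from the next observed frame
    # (no manual labels required); MSE is used here purely for illustration.
    return nn.functional.mse_loss(pred_future_feat, next_frame_feat.detach())
```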
🔧 Installation
1. Create a Conda Virtual Environment and Activate It
conda create -n law python=3.8 -y
conda activate law
If you find our work helpful, please consider citing it as follows.
@misc{li2024enhancing,
      title={Enhancing End-to-End Autonomous Driving with Latent World Model},
      author={Yingyan Li and Lue Fan and Jiawei He and Yuqi Wang and Yuntao Chen and Zhaoxiang Zhang and Tieniu Tan},
      year={2024},
      eprint={2406.08481},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
More from Us
If you're interested in world models for autonomous driving, or looking for a world model codebase on NAVSIM, feel free to check out our latest work:
WoTE (ICCV 2025): Using BEV world models for online trajectory evaluation in end-to-end autonomous driving.