The following instructions describe how to set up the conda environment. Please refer to the requirements file.
Data
Download the data from the official websites of KITTI, Waymo, Argo, or Cityscape, or try the examples we provide here, and unpack them into the Data directory.
*For devices without a display, training and testing can be performed with headless rendering, but it is 2-3 times slower. Rendering on a device with a display is preferable.
The value of crop_size depends on your GPU memory; the train_dataset_args parameters can be adjusted in the configs folder.
For high-resolution images, it is necessary to train first on a low-resolution downsampled version and then fine-tune at the higher resolution. For example, during initial training the range of random_zoom is set to 1.0-2.0 with num_samples=3000; the model is then loaded and fine-tuned with random_zoom widened to 0.7-2.0 and num_samples=6000.
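The two-stage schedule above can be sketched as a pair of config fragments. This is only an illustration: random_zoom, num_samples, crop_size, and train_dataset_args come from this README, while the surrounding dictionary structure and the crop_size values are assumptions — consult the actual files in the configs folder for the real layout.

```python
# Hypothetical sketch of the coarse-to-fine training schedule described above.
# Stage 1: train on a heavily downsampled version of the high-resolution images.
stage1 = {
    "train_dataset_args": {
        "random_zoom": (1.0, 2.0),   # stronger downsampling range for initial training
        "num_samples": 3000,
        "crop_size": (256, 256),     # assumption: choose to fit your GPU memory
    },
}

# Stage 2: load the stage-1 checkpoint and fine-tune with a widened zoom range.
stage2 = {
    "train_dataset_args": {
        "random_zoom": (0.7, 2.0),   # allows closer-to-native resolution crops
        "num_samples": 6000,
        "crop_size": (256, 256),
    },
}
```

The key point is that only random_zoom and num_samples change between the stages; the model weights are carried over from stage 1 into stage 2.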
This code draws on the following implementations: nerfstudio and READ. Many thanks to their authors!
Citation
If our work or code helps you, please consider citing our paper. Thank you!
@article{li2024dgnr,
  title={DGNR: Density-Guided Neural Point Rendering of Large Driving Scenes},
  author={Li, Zhuopeng and Wu, Chenming and Zhang, Liangjun and Zhu, Jianke},
  journal={IEEE Transactions on Automation Science and Engineering},
  year={2024},
  publisher={IEEE}
}