A README for the dataset can be found in the loaders folder.
Datasets can also be downloaded with the download_datasets.py script using the --scenes flag, e.g. --scenes o1 o2 o3, replacing o1, o2, and o3 with the scenes you want to download. You can use the shorthands all, captured, or simulated, or specify scenes by name.
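For example, to download only the captured scenes:
python download_datasets.py --scenes captured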
To view training summaries in TensorBoard, run the following:
tensorboard --logdir=./results/ --port=6006
You can then evaluate the same scene (to get quantitative and image results) with
python eval.py -c="./configs/train/captured/cinema_two_views.ini" -tc="./configs/test/simulated/lego_quantitative.ini" --checkpoint_dir=<trained model directory root>
Files
train.py: the main training script, containing the training loop
utils.py: contains the rendering function render_transient, which calls the occupancy grid to generate sample points and then samples them
misc/transient_volrend.py: called by utils.py; contains the rendering code, i.e. the whole image formation model, including the convolution with the pulse in mapping_dist_to_bin_mitsuba (a rough sketch of this step appears below)
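For intuition only, here is a minimal sketch of the image-formation step described above: per-sample radiance is accumulated into time-of-flight bins and the resulting transient is convolved with the pulse. All names (render_transient_sketch, pulse, bin_width) and tensor shapes are illustrative assumptions, not the repository's actual API; see misc/transient_volrend.py for the real implementation.

```python
# Illustrative sketch of transient volume rendering for a single ray.
# Names and shapes are assumptions, not the repo's actual interface.
import torch

def render_transient_sketch(weights, rgb, dists, pulse, n_bins, bin_width):
    """weights: (N,) volume-rendering weights for the samples along one ray
    rgb:     (N, 3) per-sample radiance
    dists:   (N,) distances from the sensor to each sample
    pulse:   (K,) discretized pulse
    Returns a (n_bins, 3) transient for the ray."""
    # Round-trip time of flight (factor of 2) determines the temporal bin
    # each sample falls into; bin_width is the bin size in distance units.
    bins = torch.clamp((2.0 * dists / bin_width).long(), 0, n_bins - 1)

    # Scatter weighted radiance into the corresponding time bins.
    transient = torch.zeros(n_bins, 3)
    transient.index_add_(0, bins, weights[:, None] * rgb)

    # Convolve each color channel with the pulse (flip the kernel so that
    # conv1d, which is cross-correlation, performs a true convolution).
    kernel = pulse.flip(0)[None, None, :]          # (1, 1, K)
    padded = transient.T[:, None, :]               # (3, 1, n_bins)
    out = torch.nn.functional.conv1d(padded, kernel, padding=pulse.numel() // 2)
    return out[:, 0, :n_bins].T                    # (n_bins, 3)
```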
Changes
I have realised that the models used for the paper (the captured ones) were somewhat undertrained (150k iterations); the configs therefore train for longer than suggested in the paper (500k iterations). The difference matters mainly for the 5-views case, where PSNR increases by ~3/4 dB.
Citation
@inproceedings{malik2023transient,
  title = {Transient Neural Radiance Fields for Lidar View Synthesis and 3D Reconstruction},
  author = {Anagh Malik and Parsa Mirdehghan and Sotiris Nousias and Kiriakos N. Kutulakos and David B. Lindell},
  booktitle = {NeurIPS},
  year = {2023}
}
Acknowledgments
We thank the authors of NerfAcc for their implementation of Instant-NGP.