Our MINE takes a single image as input and densely reconstructs the camera frustum, from which we can render novel views of the given scene.
Download the pre-downsampled version of the LLFF dataset from Google Drive, unzip it and put it in the root of the project, then start training by running the following command:
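A sketch of the launch command (the script name start_training.sh, its argument names, and the paths below are assumptions for illustration; adjust them to your machine and to the repository's actual entry point):

```bash
# Hypothetical single-node, 2-GPU launch. WORKSPACE and VERSION together
# determine where logs and checkpoints are written (see below).
sh start_training.sh MASTER_ADDR="localhost" MASTER_PORT=1234 \
    N_NODES=1 GPUS_PER_NODE=2 NODE_RANK=0 \
    WORKSPACE=./workspace VERSION=debug \
    EXTRA_CONFIG='{"training.gpus": "0,1"}'
```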
The TensorBoard logs and checkpoints are written to the working sub-directory (WORKSPACE + VERSION).
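Assuming that layout, you can monitor training with the standard TensorBoard CLI (the log path below follows the directory convention above):

```bash
# Point TensorBoard at the run's working sub-directory.
tensorboard --logdir "${WORKSPACE}/${VERSION}"
```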
Apart from the LLFF dataset, we also experimented on the RealEstate10K, KITTI Raw, and Flowers Light Fields datasets; the data pre-processing code and training flow for these datasets will be released later.
Running our pretrained models:
We release pretrained models trained on the RealEstate10K, KITTI, and Flowers datasets.
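As a sketch of how a released checkpoint might be run (the script name, flags, and checkpoint filename below are all hypothetical; the repository's own demo scripts are authoritative):

```bash
# Hypothetical demo invocation -- script name and flags are assumptions,
# not the repository's confirmed interface.
python image_to_video.py \
    --checkpoint_path checkpoints/realestate10k.pth \
    --gpus 0 \
    --data_path ./demo_images \
    --output_dir ./outputs
```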
If you find our work helpful to your research, please cite our paper:
@inproceedings{mine2021,
title={MINE: Towards Continuous Depth MPI with NeRF for Novel View Synthesis},
author={Jiaxin Li and Zijian Feng and Qi She and Henghui Ding and Changhu Wang and Gim Hee Lee},
year={2021},
booktitle={ICCV},
}