Note: This repository was archived by the owner on Jul 2, 2024 and is now read-only.
NeRFactor: Neural Factorization of Shape and Reflectance Under an Unknown Illumination
Xiuming Zhang, Pratul P. Srinivasan, Boyang Deng, Paul Debevec, William T. Freeman, Jonathan T. Barron
TOG 2021 (Proc. SIGGRAPH Asia)
This is not an officially supported Google product.
Setup
Clone this repository:
git clone https://github.com/google/nerfactor.git
Install a Conda environment with all dependencies:
cd nerfactor
conda env create -f environment.yml
conda activate nerfactor
Tips:
You can find the TensorFlow, cuDNN, and CUDA versions in environment.yml
(a quick sanity check for the installed environment is sketched after this list).
The IPython dependency in environment.yml is used only for IPython.embed().
If you do not use that to insert breakpoints while debugging, you can
remove it (leaving it in place does no harm).
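
As a minimal, hypothetical sanity check (not part of the repository), the following Python snippet prints the TensorFlow version and the GPUs it can see; run it inside the activated environment and compare against the versions pinned in environment.yml. It assumes the TensorFlow 2.x API.

# Hypothetical sanity check; run inside the activated `nerfactor` environment.
# Assumes the TensorFlow 2.x API pinned in environment.yml.
import tensorflow as tf
print("TensorFlow version:", tf.__version__)
print("Built with CUDA:", tf.test.is_built_with_cuda())
print("Visible GPUs:", tf.config.list_physical_devices("GPU"))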
Downloads
If you are using our data, metadata, or pre-trained models
(new as of 07/17/2022), see the "Downloads" section of the
project page.
If you are BYOD'ing (bringing your own data), go to data_gen/ to
either render your own synthetic data or process your real captures.
Running the Code
Go to nerfactor/ and follow the instructions there.
Evaluation (New as of 09/10/2022)
We were contacted a few times about the numbers reported in Table 1.
Here are the four sequences we used for generating those numbers:
drums_3072, ficus_2188, hotdog_2163, lego_3072, all of which have
been released (see the Downloads section above).
For all sequences, we used these validation views: 0,1,2,3,4,5,6,7
and these (uniformly sampled) test views: 49,99,149,199.
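
For concreteness, the split can be written down in a couple of lines; this is an illustrative sketch that assumes the standard 200-view test split of these scenes, not code from this repository.

# Illustrative sketch of the Table 1 view split (assumes the standard
# 200-view test split of these scenes; not code from this repository).
val_views = list(range(8))             # validation views: 0, 1, ..., 7
test_views = list(range(49, 200, 50))  # uniformly sampled test views: 49, 99, 149, 199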
Issues or Questions?
If the issue is code-related, please open an issue here.
For questions, please also consider opening an issue, as it may benefit future
readers. Otherwise, email Xiuming Zhang.