In this work, we present a new solution, termed RGBD2, that sequentially
generates novel RGBD views along a camera trajectory; the scene geometry
is simply the fusion of these views.
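At a high level, the generation loop alternates between rendering, inpainting, and fusion. The sketch below is only illustrative; the function names (render_partial_rgbd, inpaint_rgbd, fuse_into_mesh) are placeholders and not the actual API of this repo:

```python
# Illustrative sketch of incremental view inpainting (hypothetical helper names).
def generate_scene(diffusion_model, camera_trajectory, fusion_volume):
    for pose in camera_trajectory:
        # Render the partially reconstructed scene into the new view;
        # pixels not yet observed remain empty and are marked by the mask.
        partial_rgbd, mask = render_partial_rgbd(fusion_volume, pose)

        # The RGBD diffusion model inpaints the missing pixels,
        # completing both color and depth for this view.
        full_rgbd = inpaint_rgbd(diffusion_model, partial_rgbd, mask)

        # Back-project the completed RGBD view and fuse it into the
        # running scene representation.
        fuse_into_mesh(fusion_volume, full_rgbd, pose)

    return fusion_volume
```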
If you want to train with multiple GPUs, set, for example, CUDA_VISIBLE_DEVICES=0,1,2,3.
Note that the training script visualizes the training process by writing TensorBoard event files.
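For example, assuming the event files are written under some log directory (the exact path depends on your configuration), you can monitor training with:
tensorboard --logdir <path_to_your_log_dir>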
Inference
To generate a test scene, simply run:
CUDA_VISIBLE_DEVICES=0 python experiments/run.py
By additionally providing --interactive, you can steer the generation process manually through a GUI.
Our GUI code uses Matplotlib, so you can even run the code on a remote server and use an X server (e.g. MobaXterm) to enable graphical control!
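For example, combining the inference command above with the interactive flag:
CUDA_VISIBLE_DEVICES=0 python experiments/run.py --interactive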
Citation
If you find our work useful, please consider citing our paper:
@InProceedings{Lei_2023_CVPR,
author = {Lei, Jiabao and Tang, Jiapeng and Jia, Kui},
title = {RGBD2: Generative Scene Synthesis via Incremental View Inpainting using RGBD Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2023}
}
This repo is still an early-access version and is under active development.
If you have any questions or requests, feel free to contact me or open a GitHub issue.