SceneFormer: Indoor Scene Generation with Transformers
Initial code release for the SceneFormer paper. It contains the models and the train and test scripts for the shape-conditioned model; the text-conditioned model and a detailed README are coming soon.
Install the requirements listed in requirements.txt and environment.yaml in a conda environment. Packages common to both can be installed through either pip or conda.
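A minimal setup sketch, assuming the environment defined in environment.yaml is named sceneformer (the environment name is an assumption, not confirmed by this README):

```sh
# Create and activate the conda environment
# (the name "sceneformer" is an assumption; use the name defined in environment.yaml)
conda env create -f environment.yaml
conda activate sceneformer

# Install the remaining pip packages
pip install -r requirements.txt
```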
Prepare Data
The SUNCG dataset is currently unavailable, so all SUNCG-related files have been removed. The dataset can be prepared with the scripts taken from deepsynth.
Train
Configure the experiment in configs/scene_shift_X_config.yaml, where X is one of cat, dim, loc, ori.
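The training entry point is not listed here; as a hedged sketch, a run might be launched as follows, where both the script name scene_scripts/train.py and the --config flag are assumptions for illustration only:

```sh
# Hypothetical launch command; the script name and flag are assumptions,
# not confirmed by this README
python scene_scripts/train.py --config configs/scene_shift_cat_config.yaml
```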
Test
Configure the model paths in scene_scripts/test.py and then run:
python scene_scripts/test.py
If you find our work useful, please consider citing us:
@article{wang2020sceneformer,
  title={SceneFormer: Indoor Scene Generation with Transformers},
  author={Wang, Xinpeng and Yeshwanth, Chandan and Nie{\ss}ner, Matthias},
  journal={arXiv preprint arXiv:2012.09793},
  year={2020}
}
About
[3DV 2021 Oral] SceneFormer: Indoor Scene Generation with Transformers