This is the official code for the CVPR 2024 paper "GALA: Generating Animatable Layered Assets from a Single Scan".
News
[2024/01/24] Initial release.
Installation
Set up the environment using conda. We used a single 24GB GPU in our work, but you may adjust the batch size to fit your GPUs.
conda env create -f env.yaml
conda activate gala
Install and download the required libraries and data. To download SMPL-X, you must register here. Installing xformers reduces training time, but the installation itself takes very long; remove it from "scripts/setup.sh" if needed.
bash scripts/setup.sh
Download "ViT-H HQ-SAM model" checkpoint here, and place it in ./segmentation.
Running the code
Prepare THuman2.0 Dataset
We use THuman2.0 in our demo since it is publicly accessible. The same pipeline also works for commercial datasets such as RenderPeople, which we used in our paper. Request access to the THuman2.0 scans and SMPL-X parameters here and organize the folders as below.
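As a rough illustration only (the exact paths and filenames are assumptions; the scan ID matches the demo config th_0001), the data directory might look like this:
data/THuman2.0/
  0001/
    0001.obj            # raw scan mesh
    material0.jpeg      # scan texture
    material0.mtl
  smplx/
    0001.pkl            # fitted SMPL-X parameters for scan 0001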
Outputs are written to ./results. You can modify the input text conditions in "config/th_0001_geo.yaml" or "config/th_0001_tex.yaml", and change experimental settings in "config/default_geo.yaml" or "config/default_tex.yaml".
Citation
If you find this work useful, please cite our paper:
@inproceedings{kim2024gala,
title={Gala: Generating animatable layered assets from a single scan},
author={Kim, Taeksoo and Kim, Byungjun and Saito, Shunsuke and Joo, Hanbyul},
booktitle={CVPR},
year={2024}
}