Yuheng Jiang, Zhehao Shen, Yu Hong, Chengcheng Guo, Yize Wu, Yingliang Zhang, Jingyi Yu, Lan Xu
| Webpage | Full Paper | Video |

Official implementation of DualGS (Robust Dual Gaussian Splatting for Immersive Human-centric Volumetric Videos)
We present a novel Gaussian-Splatting-based approach, dubbed DualGS, for real-time and high-fidelity playback of complex human performance with excellent compression ratios. Our key idea in DualGS is to separately represent motion and appearance using the corresponding skin and joint Gaussians. Such an explicit disentanglement can significantly reduce motion redundancy and enhance temporal coherence. We begin by initializing the DualGS and anchoring skin Gaussians to joint Gaussians at the first frame. Subsequently, we employ a coarse-to-fine training strategy for frame-by-frame human performance modeling. It includes a coarse alignment phase for overall motion prediction as well as a fine-grained optimization for robust tracking and high-fidelity rendering.
The repository contains submodules, so please check it out with:

```bash
# HTTPS
git clone https://github.com/HiFi-Human/DualGS.git --recursive
```

or

```bash
# SSH
git clone git@github.com:HiFi-Human/DualGS.git --recursive
```
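If you have already cloned the repository without `--recursive`, you can fetch the submodules afterwards with the standard Git command:

```bash
git submodule update --init --recursive
```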
Our provided install method is based on Conda package and environment management. Create a new environment:

```bash
conda create -n dualgs python=3.10
conda activate dualgs
```

First install CUDA and PyTorch; our code is evaluated on CUDA 11.8 and PyTorch 2.1.2+cu118. Then install the following dependencies:
```bash
conda install pytorch==2.1.2 torchvision==0.16.2 torchaudio==2.1.2 pytorch-cuda=11.8 -c pytorch -c nvidia
pip install submodules/diff-gaussian-rasterization
pip install submodules/simple-knn
pip install submodules/fused-ssim
pip install -r requirements.txt
```
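Optionally, you can verify that the CUDA build of PyTorch is active before moving on:

```bash
# Should print the PyTorch version, the CUDA version it was built with, and True
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
```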
Our code is mainly evaluated on multi-view human-centric datasets, including the HiFi4G and DualGS datasets. Please download the data you need.
The overall file structure is as follows:
```
<location>
├── image_white
│   ├── %d                      - The frame number, starts from 0.
│   │   └── %d.png              - Masked RGB images for each view. View number starts from 0.
│   └── transforms.json         - Camera extrinsics and intrinsics in instant-NGP format.
│
├── image_white_undistortion
│   ├── %d                      - The frame number, starts from 0.
│   │   └── %d.png              - Undistorted masked RGB images for each view. View number starts from 0.
│   └── colmap/sparse/0         - Camera extrinsics and intrinsics in Gaussian Splatting format.
```
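If you want to quickly sanity-check a downloaded sequence, a one-liner like the following can help. It assumes the standard instant-NGP `transforms.json` layout with a top-level `frames` list whose entries correspond to the camera views; adjust the path to your data:

```bash
# Count the camera entries listed in transforms.json (assumes the instant-NGP "frames" key)
python -c "import json; d = json.load(open('image_white/transforms.json')); print(len(d['frames']), 'cameras')"
```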
To train DualGS on a sequence, run:

```bash
python train.py \
    -s <path to HiFi4G or DualGS dataset> -m <output path> \
    --frame_st 0 --frame_ed 1000 \
    --iterations 30000 --subseq_iters 15000 \
    --training_mode 0 \
    --parallel_load \
    -r 2
```

| Parameter | Type | Description |
|---|---|---|
| `--iterations` | int | Total training iterations for the first frame, covering both stages: Stage 1 (JointGS training) and Stage 2 (SkinGS training). |
| `--subseq_iters` | int | Training iterations per frame for subsequent frames after the first frame. |
| `--frame_st` | int | Start frame number. |
| `--frame_ed` | int | End frame number. |
| `--training_mode` | {0,1,2} | Training pipeline selection: 0 = both stages (JointGS + SkinGS), 1 = JointGS only, 2 = SkinGS only. |
| `--ply_path` | str | Path to the point cloud used for initialization (defaults to points3d.ply in the dataset). |
| `--motion_folder` | str | If you already have a trained JointGS and only want to train SkinGS, use this parameter to manually specify the path to the JointGS results (see the example below). |
| `--parallel_load` | flag | Enables multi-threaded image loading during dataset loading. |
| `--seq` | flag | By default, training warps each frame from the first frame to the n-th frame. Enabling this flag warps from the (n-1)-th frame to the n-th frame instead. |
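For reference, one possible two-stage workflow based on the flags above is to first train JointGS only (`--training_mode 1`) and then train SkinGS only (`--training_mode 2`) while pointing `--motion_folder` at the previous result. Treat this as a sketch; check `train.py` for the exact paths it expects:

```bash
# Stage 1: JointGS only
python train.py \
    -s <path to dataset> -m <output path> \
    --frame_st 0 --frame_ed 1000 \
    --iterations 30000 --subseq_iters 15000 \
    --training_mode 1 \
    --parallel_load -r 2

# Stage 2: SkinGS only, reusing the trained JointGS
python train.py \
    -s <path to dataset> -m <output path> \
    --frame_st 0 --frame_ed 1000 \
    --iterations 30000 --subseq_iters 15000 \
    --training_mode 2 \
    --motion_folder <path to the trained JointGS> \
    --parallel_load -r 2
```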
The results are as follows:
```
<location>
├── track
│   ├── ckt                     - The results of JointGS.
│   │   ├── point_cloud_0.ply
│   │   │   ...
│   │   └── point_cloud_n.ply
│   ├── cameras.json
│   └── cfg_args
├── ckt                         - The results of SkinGS.
│   ├── point_cloud_0.ply
│   │   ...
│   └── point_cloud_n.ply
├── joint_opt                   - The RT matrices of JointGS after stage-2 optimization.
│   ├── joint_RT_0.npz
│   │   ...
│   └── joint_RT_n.npz
├── cameras.json
└── cfg_args
```
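The per-frame transforms in `joint_opt/` are standard NumPy `.npz` archives. If you want to inspect them (for example, to consume the tracked joint motion in your own pipeline), you can list the stored arrays without assuming their exact names:

```bash
# Print the array names and shapes stored in the first frame's archive
python -c "import numpy as np; d = np.load('joint_opt/joint_RT_0.npz'); [print(k, d[k].shape) for k in d.files]"
```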
To render the trained model:

```bash
python render.py -m <path to trained model> -st <start frame number> -e <end frame number> --parallel_load   # Generate renderings
```

You can select the desired views with the `--views` parameter.

To compute error metrics on the renderings:

```bash
python scripts/evaluation.py -g <path to gt> -r <path to renderings>   # Compute error metrics on renderings
```

Our modified viewer is located in the `DynamicGaussianViewer/` directory.
The build process is identical to that of the official Gaussian Splatting repository.
To compile the viewer, please follow the official instructions:
👉 https://github.com/graphdeco-inria/gaussian-splatting
```bash
cd DynamicGaussianViewer/
# Follow the same steps as in the official repo to build the viewer.
```

After building, launch the viewer with:

```bash
./install/bin/SIBR_gaussianViewer_app_rwdi.exe -m <path to the folder where cfg_args and cameras.json exist> -d <path to point clouds folder> -start <start frame> -end <end frame>
# optional: --step 1 --rendering-size 1920 1080
```
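For example, with the output layout shown above, one plausible invocation points `-m` at the output root (where `cfg_args` and `cameras.json` live) and `-d` at the SkinGS `ckt/` folder; adjust the paths and frame range to your run:

```bash
./install/bin/SIBR_gaussianViewer_app_rwdi.exe \
    -m <output path> \
    -d <output path>/ckt \
    -start 0 -end 1000 \
    --step 1 --rendering-size 1920 1080
```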
We would like to thank the authors of Taming 3DGS for their excellent implementation, which we used in this project in place of the original 3DGS for acceleration.
This project contains code from multiple sources with distinct licensing terms:
The portions of code derived from the original Gaussian Splatting implementation are licensed under the Gaussian Splatting Research License.
📄 See: LICENSE.original
All code modifications, extensions, and new components developed by our team are licensed under the MIT License.
📄 See: LICENSE
```bibtex
@article{jiang2024robust,
  title={Robust dual gaussian splatting for immersive human-centric volumetric videos},
  author={Jiang, Yuheng and Shen, Zhehao and Hong, Yu and Guo, Chengcheng and Wu, Yize and Zhang, Yingliang and Yu, Jingyi and Xu, Lan},
  journal={ACM Transactions on Graphics (TOG)},
  volume={43},
  number={6},
  pages={1--15},
  year={2024},
  publisher={ACM New York, NY, USA}
}
```

