Generative Human Motion Stylization in Latent Space
ICLR 2024
¹University of Alberta
²Noah's Ark Lab, Huawei Canada
Label-based Stylization Gallery
Left: Content motion   Right: Stylized motion
Style: Zombie
Style: Sneaky
Style: FemaleModel
Style: Old
Label-based Stylization (Diverse)
Motion-based Stylization
Prior-based Stylization
A global probabilistic style space, confined by a Gaussian prior, is established through our learning scheme. Styles can then be randomly sampled from this prior to achieve stochastic stylization.
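The sampling step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the latent dimensions and the toy linear "decoder" are assumptions made for the example.

```python
import numpy as np

# Hypothetical dimensions -- the model's actual latent sizes are not stated here.
STYLE_DIM = 64   # dimensionality of the global style code
SEQ_LEN = 60     # frames in the content motion
POSE_DIM = 263   # per-frame pose feature size

rng = np.random.default_rng(0)

def sample_style_from_prior(n, dim=STYLE_DIM, rng=rng):
    """Draw n style codes z ~ N(0, I), as in prior-based stylization."""
    return rng.standard_normal((n, dim))

def stylize(content_latent, style_code, W):
    """Toy stand-in for the learned decoder: it merely adds a linear
    projection of the style code to every frame of the content latent."""
    return content_latent + style_code @ W  # broadcast style over frames

# One content latent sequence, three randomly sampled styles
content = rng.standard_normal((SEQ_LEN, POSE_DIM))
styles = sample_style_from_prior(3)
W = rng.standard_normal((STYLE_DIM, POSE_DIM)) * 0.1

# Each sampled style yields a distinct stylization of the same content
stylized = [stylize(content, z, W) for z in styles]
```

Because each draw from the prior is a different code, the same content motion maps to a different stylized output per sample, which is what makes the stylization stochastic.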
Probabilistic Style Space
We highlight the features of our probabilistic style space by showcasing its diverse stylization capacity and style interpolation ability.
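The interpolation ability mentioned above amounts to blending two style codes inside the latent space. A minimal sketch, assuming simple linear interpolation between two codes (the style names are only illustrative labels):

```python
import numpy as np

STYLE_DIM = 64  # hypothetical style-code size
rng = np.random.default_rng(1)

# Two style codes, e.g. ones extracted from "Zombie" and "Sneaky" motions
z_a = rng.standard_normal(STYLE_DIM)
z_b = rng.standard_normal(STYLE_DIM)

def interpolate_styles(z_a, z_b, steps=5):
    """Linearly interpolate between two style codes. Because the style
    space is confined by a Gaussian prior, intermediate codes stay in a
    region the decoder has seen, so they remain plausible styles."""
    ts = np.linspace(0.0, 1.0, steps)
    return [(1.0 - t) * z_a + t * z_b for t in ts]

codes = interpolate_styles(z_a, z_b)
```

The endpoints of the sequence recover the two original codes exactly; the intermediate codes give a gradual transition between the two styles.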
Application: Text2Motion Stylization
We showcase the generalization ability of our method by stylizing out-of-distribution (OOD) motions generated by an off-the-shelf text-to-motion (T2M) model.
Content Motion Generation Works 🚀🚀
MoMask: Swift text-driven motion generation through masked generative modeling.
TM2D: Learning dance generation with textual instruction.
Action2Motion: Diverse action-conditioned motion generation.
BibTeX
@inproceedings{
guo2024generative,
title={Generative Human Motion Stylization in Latent Space},
author={Chuan Guo and Yuxuan Mu and Xinxin Zuo and Peng Dai and Youliang Yan and Juwei Lu and Li Cheng},
booktitle={The Twelfth International Conference on Learning Representations},
year={2024},
url={https://openreview.net/forum?id=daEqXJ0yZo}
}

