1University of Maryland, College Park, USA, 2Adobe Research
This is the implementation of ShapeMove, a framework for generating body-shape-aware human motion from text. ShapeMove combines a quantized VAE with continuous shape conditioning and a pretrained language model to synthesize realistic, shape-aligned motions from natural language descriptions.
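To make the "quantized VAE" idea concrete, here is a minimal, hypothetical sketch of the vector-quantization step at the core of such a model: each continuous latent frame is snapped to its nearest codebook entry, yielding discrete motion tokens a language model can predict. All names, codebook size, and dimensions below are illustrative assumptions, not ShapeMove's actual API.

```python
import numpy as np

def quantize(latents, codebook):
    """Snap each latent frame to its nearest codebook entry.

    latents:  (T, D) continuous encoder outputs
    codebook: (K, D) learned code vectors
    returns:  (T, D) quantized latents and (T,) discrete token indices
    """
    # Pairwise Euclidean distances between every frame and every code.
    dists = np.linalg.norm(latents[:, None, :] - codebook[None, :, :], axis=-1)
    indices = dists.argmin(axis=1)  # discrete motion tokens
    return codebook[indices], indices

rng = np.random.default_rng(0)
codebook = rng.normal(size=(512, 64))  # K=512 codes, D=64 dims (assumed)
latents = rng.normal(size=(8, 64))     # 8 motion frames
quantized, tokens = quantize(latents, codebook)
print(quantized.shape, tokens.shape)   # (8, 64) (8,)
```

In the full framework, the language model generates these token indices from the text prompt, and the decoder reconstructs motion from them conditioned on the continuous shape parameters.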
This step will download our pretrained ShapeMove model trained on the AMASS dataset, the flan-t5-base language model, and the SMPL neutral body model for visualization.
📐 Model Inference
bash scripts/demo.sh
The generated motion and shape betas will be saved under `outputs`.
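The saved `.npy` files can be inspected with numpy. The file names and array shapes below are assumptions for illustration (here we write dummy stand-ins first so the snippet is self-contained); check the actual files produced under `outputs/` by the demo script.

```python
import os
import numpy as np

# Dummy stand-ins mimicking the demo output; real files come from demo.sh.
os.makedirs("outputs", exist_ok=True)
np.save("outputs/motion.npy", np.zeros((120, 263)))  # (frames, motion features) -- assumed layout
np.save("outputs/beta.npy", np.zeros(10))            # SMPL shape betas: 10 coefficients

# Load and inspect the generated motion and shape parameters.
motion = np.load("outputs/motion.npy")
betas = np.load("outputs/beta.npy")
print(motion.shape, betas.shape)  # (120, 263) (10,)
```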
After installing Blender and the required packages in Blender's Python environment, run the following command to verify the installation:
blender --background --version
This should return Blender 2.93.18.
Render Meshes with Blender
# Generate meshes from the saved beta and motion .npy files
python -m utils.mesh --dir [path/to/inference/output/folder]
# Render images with Blender (from the obj/ply files)
blender --background -noaudio --python utils/blender_render.py -- --mode=video --dir [path/to/mesh/folder]
# Gather the rendered images into a video
python utils/visualization.py --dir [path/to/mesh/folder]
Citations
Shape My Moves (this work)
@inproceedings{shapemove,
author = {Liao, Ting-Hsuan and Zhou, Yi and Shen, Yu and Huang, Chun-Hao Paul and Mitra, Saayan and Huang, Jia-Bin and Bhattacharya, Uttaran},
title = {Shape My Moves: Text-Driven Shape-Aware Synthesis of Human Motions},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2025}
}