Run the commands below to generate a stylized 4D human avatar from any text prompt:
python clip_actor.py --prompt "a scuba diver is scuba diving" --exp_name scuba_diving
python clip_actor.py --prompt "Freddie Mercury is dancing" --exp_name mercury_dancing
The outputs include the final .mp4 video, stylized .obj files, colored rendered views, and screenshots captured during training.
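To run several prompts in one go, the two example invocations above can be wrapped in a small batch script. This is a minimal sketch: the `--prompt` and `--exp_name` flags are taken from the examples above, while the prompt list and the name-derivation rule (lowercase, spaces to underscores) are illustrative assumptions. The script echoes each command as a dry run; remove the `echo` to actually execute.

```shell
#!/usr/bin/env bash
# Hypothetical batch driver for clip_actor.py (sketch, not part of the repo).
prompts=(
  "a scuba diver is scuba diving"
  "Freddie Mercury is dancing"
)
for prompt in "${prompts[@]}"; do
  # Derive an experiment name: lowercase, spaces replaced by underscores.
  exp_name=$(printf '%s' "$prompt" | tr ' ' '_' | tr '[:upper:]' '[:lower:]')
  # Dry run: print the command. Drop "echo" to launch training for real.
  echo python clip_actor.py --prompt "$prompt" --exp_name "$exp_name"
done
```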
Citation
If you find our code or paper helpful, please consider citing:
@inproceedings{youwang2022clipactor,
title={CLIP-Actor: Text-Driven Recommendation and Stylization for Animating Human Meshes},
author={Kim Youwang and Kim Ji-Yeon and Tae-Hyun Oh},
year={2022},
booktitle={ECCV}
}
Acknowledgement
This work was supported by Institute of Information & communications Technology Planning & Evaluation (IITP)
grant funded by the Korea government (MSIT) (No. 2022-00164860, Development of Human Digital Twin Technology Based on Dynamic Behavior Modeling and Human-Object-Space Interaction; and No. 2021-0-02068, Artificial Intelligence Innovation Hub).
The implementation of CLIP-Actor is largely inspired by and builds on the seminal prior work, Text2Mesh (Michel et al.).
We thank the authors of Text2Mesh for making their code public. If you find their work helpful, please consider citing it as well.
About
[ECCV'22] Official PyTorch Implementation of "CLIP-Actor: Text-Driven Recommendation and Stylization for Animating Human Meshes"