This is an implementation of Robots Learn Social Skills: End-to-End Learning of Co-Speech Gesture Generation for Humanoid Robots (Paper, Project Page).
The original paper used the TED dataset, but in this repository the code has been modified to use the Talking With Hands 16.2M dataset for the GENEA Challenge 2022.
The model was also changed to estimate rotation matrices for upper-body joints instead of Cartesian joint coordinates.
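Note that BVH files store joint rotations as Euler angles, so rotation-matrix outputs have to be converted before a motion file can be written. A minimal sketch of one such conversion using SciPy; the 'ZXY' rotation order is only an example, the real order comes from the BVH skeleton definition:

# Sketch: convert one joint's predicted 3x3 rotation matrix to Euler angles.
# 'ZXY' is an example rotation order; use the channel order declared in the
# target BVH skeleton.
import numpy as np
from scipy.spatial.transform import Rotation as R

pred = np.eye(3)  # stand-in for a predicted rotation matrix
print(R.from_matrix(pred).as_euler('ZXY', degrees=True))  # [0. 0. 0.]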
Environment
The code was developed with Python 3.8 and PyTorch 1.5.0 on Ubuntu 18.04.
Prepare
Install dependencies
pip install -r requirements.txt
Download the FastText vectors from here and put crawl-300d-2M-subword.bin into the resource folder (resource/crawl-300d-2M-subword.bin).
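To check that the file is in place and readable, the vectors can be loaded with the fasttext package; this is only a sanity check, since the repository loads the file through its own data-loading code:

# Sanity check: confirm the subword vectors load (not part of the pipeline).
import fasttext
model = fasttext.load_model('resource/crawl-300d-2M-subword.bin')
print(model.get_word_vector('gesture').shape)  # (300,)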
Train
Make LMDB
cd scripts
python twh_dataset_to_lmdb.py [PATH_TO_DATASET]
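If you want to confirm the conversion worked, the generated database can be opened with the lmdb package; the path below is an assumption, so check the script's actual output location:

# Quick check of a generated database (the path is an assumption).
import lmdb
env = lmdb.open('lmdb/lmdb_train', readonly=True, lock=False)
with env.begin() as txn:
    print(txn.stat()['entries'], 'entries')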
Update the paths and parameters in config/seq2seq.yml, then run train.py:
python train.py --config=../config/seq2seq.yml
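If train.py fails to start, a quick way to confirm that the edited YAML still parses is PyYAML; the key names depend on the repository's config schema, so this only lists them:

# List the top-level keys of the training config (schema-agnostic check).
import yaml
with open('../config/seq2seq.yml') as f:
    cfg = yaml.safe_load(f)
print(sorted(cfg))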
Inference
Train a model yourself or use the pretrained model (output/train_seq2seq/baseline_icra19_checkpoint_100.bin). If you use the pretrained model, put the vocab_cache.pkl file into the LMDB train path.
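For example, assuming the cache file sits next to the checkpoint and the training LMDB lives at lmdb/lmdb_train (both paths are assumptions; adjust them to your layout):

# Example only: both paths are assumptions about your local layout.
import shutil
shutil.copy('output/train_seq2seq/vocab_cache.pkl',
            'lmdb/lmdb_train/vocab_cache.pkl')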
Run inference to generate a BVH motion file from speech text (a TSV transcript).
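The transcripts are tab-separated word timings. A minimal reader, assuming the GENEA Challenge 2022 column layout of start time, end time, and word:

# Peek at a transcript TSV (assumed columns: start time, end time, word).
import csv
with open('val_2022_v1_006.tsv') as f:
    for start, end, word in csv.reader(f, delimiter='\t'):
        print(float(start), float(end), word)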
Result video for val_2022_v1_006.tsv, rendered with the challenge visualization server:
val_2022_v1_006_generated.mp4
Remarks
I found the model was not successful when all joints were considered, so I trained it only on upper-body joints (excluding fingers) and used fixed values for the remaining joints (via JointSelector in PyMo). You can easily try a different set of joints (e.g., the full body including fingers) by specifying joint names in the target_joints variable in twh_dataset_to_lmdb.py; a sketch follows below. If you change target_joints, please also update data_mean and data_std in the config file; the mean and std values are printed to the console during the Make LMDB step above.
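As a sketch of what such a change looks like (the joint names are placeholders, not the exact TWH skeleton names; PyMo's JointSelector keeps only the listed joints, plus the root when include_root=True):

# Illustrative only: joint names are placeholders; use the names from the
# TWH BVH skeleton when editing target_joints in twh_dataset_to_lmdb.py.
from pymo.preprocessing import JointSelector

target_joints = [
    'spine0', 'spine1', 'neck', 'head',
    'right_shoulder', 'right_arm', 'right_forearm', 'right_wrist',
    'left_shoulder', 'left_arm', 'left_forearm', 'left_wrist',
]
selector = JointSelector(target_joints, include_root=True)
# selector.fit_transform([...]) is then applied to the parsed BVH data.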
License
Please see LICENSE.md
Citation
@INPROCEEDINGS{
yoonICRA19,
title={Robots Learn Social Skills: End-to-End Learning of Co-Speech Gesture Generation for Humanoid Robots},
author={Yoon, Youngwoo and Ko, Woo-Ri and Jang, Minsu and Lee, Jaeyeon and Kim, Jaehong and Lee, Geehyuk},
booktitle={Proc. of the International Conference on Robotics and Automation (ICRA)},
year={2019}
}