This is the PyTorch implementation for: Trajectory-guided Motion Perception for Facial Expression Quality Assessment in Neurological Disorders
arXiv version
We introduce Trajectory-guided Motion Perception Transformer (TraMP-Former), a novel FEQA framework that fuses landmark trajectory features for fine-grained motion capture with visual semantic cues from RGB frames, ultimately regressing the combined features into a quality score.
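To make the fusion-and-regression idea concrete, here is a minimal PyTorch sketch of combining pooled trajectory and RGB features into a single quality score. The module name, feature dimensions, and the simple concatenation-plus-MLP fusion are illustrative assumptions and do not reproduce the actual TraMP-Former architecture.

```python
# Hypothetical sketch of the fuse-then-regress idea; dimensions and fusion
# strategy are assumptions, not the actual TraMP-Former design.
import torch
import torch.nn as nn

class FusionRegressor(nn.Module):
    def __init__(self, traj_dim=256, rgb_dim=768, hidden_dim=512):
        super().__init__()
        # Project both modalities into a shared space before fusing.
        self.traj_proj = nn.Linear(traj_dim, hidden_dim)
        self.rgb_proj = nn.Linear(rgb_dim, hidden_dim)
        # Regress the fused representation to a single quality score.
        self.head = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, traj_feat, rgb_feat):
        # traj_feat: (B, traj_dim) pooled landmark-trajectory features
        # rgb_feat:  (B, rgb_dim)  pooled visual features from the RGB encoder
        fused = torch.cat([self.traj_proj(traj_feat), self.rgb_proj(rgb_feat)], dim=-1)
        return self.head(fused).squeeze(-1)  # (B,) predicted quality scores
```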
Use the above link to access the video frames of the PFED5 dataset.
You can download the corresponding trajectory data separately from the provided source.
For the augmented version of this dataset, the list of start and end frame indices used for splitting is available in data/Toronto_NeuroFace_split.csv.
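If you want to inspect the split file programmatically, a small sketch along these lines should work; the column names used below are assumptions about the CSV layout, so adjust them to the actual header.

```python
import pandas as pd

# Inspect the augmented-split definition. The column names referenced below
# ("start_frame", "end_frame") are assumptions, not verified identifiers.
splits = pd.read_csv("data/Toronto_NeuroFace_split.csv")
print(splits.columns.tolist())

for _, row in splits.iterrows():
    print(row.get("start_frame"), row.get("end_frame"))
```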
Training and Testing on PFED5
Run `python main_rgb1x1_128.py --class_idx 1 --batch_size 4`
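To train and test every expression class in turn, a small launcher along the following lines can help. It is a convenience sketch, not part of the repo; the assumed class-index range (five PFED5 expressions) is illustrative, so verify the valid values against the argument parsing in main_rgb1x1_128.py.

```python
# Hypothetical launcher: run training/testing for each expression class in
# sequence. The class-index range is an assumption based on the five PFED5
# expressions; check main_rgb1x1_128.py for the actual valid indices.
import subprocess

for class_idx in range(5):
    subprocess.run(
        ["python", "main_rgb1x1_128.py",
         "--class_idx", str(class_idx),
         "--batch_size", "4"],
        check=True,
    )
```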
Pretrained Weights
Download the pretrained weights (RGB encoder and trajectory encoder) from Google Drive and place the files under the models/pretrained_weights folder.
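As a quick sanity check that the downloads are intact, you can load the checkpoints with PyTorch. The file names below are assumptions about what the archive contains; substitute the names you actually downloaded.

```python
import torch

# Load the checkpoints on CPU just to verify they deserialize correctly.
# File names are placeholders for whatever the Google Drive archive provides.
rgb_state = torch.load("models/pretrained_weights/rgb_encoder.pth", map_location="cpu")
traj_state = torch.load("models/pretrained_weights/traj_encoder.pth", map_location="cpu")
print(f"RGB encoder checkpoint: {len(rgb_state)} entries")
print(f"Trajectory encoder checkpoint: {len(traj_state)} entries")
```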
Citation
If you find our work useful in your research, please consider giving it a star ⭐ and citing our paper:
@inproceedings{tramp-former,
title={Trajectory-guided Motion Perception for Facial Expression Quality Assessment in Neurological Disorders},
author={Shuchao Duan and Amirhossein Dadashzadeh and Alan Whone and Majid Mirmehdi},
booktitle={2025 19th IEEE International Conference on Automatic Face and Gesture Recognition (FG)},
year={2025},
organization={IEEE}
}
Acknowledgement
We gratefully acknowledge the contribution of the Parkinson’s study participants. The clinical trial from which the video data of the people with Parkinson’s was sourced was funded by Parkinson’s UK (Grant J-1102), with support from Cure Parkinson’s. Portions of the research here use the Toronto NeuroFace Dataset, collected by Dr. Yana Yunusova and the Vocal Tract Visualization and Bulbar Function Lab teams at the UHN-Toronto Rehabilitation Institute and Sunnybrook Research Institute respectively, and financially supported by the Michael J. Fox Foundation, NIH-NIDCD, the Natural Sciences and Engineering Research Council, the Heart and Stroke Foundation Canadian Partnership for Stroke Recovery, and AGE-WELL NCE.