[BMVC2022, IJCV2023, Best Student Paper, Spotlight] Official code for the paper "In the Eye of Transformer: Global-Local Correlation for Egocentric Gaze Estimation".
[Update] This work has been accepted by the International Journal of Computer Vision (IJCV).
[Update] We won the Best Student Paper award at BMVC 2022.
Introduction
This repository provides the official implementation of the Global-Local Correlation (GLC) model for egocentric gaze estimation, introduced in our paper "In the Eye of Transformer: Global-Local Correlation for Egocentric Gaze Estimation".
Installation
Please find installation instructions in INSTALL.md. This repository is built on top of SlowFast, so you can also refer to the instructions in SlowFast Installation.
You may follow the instructions in DATASET.md to prepare the datasets. Models pretrained on Kinetics can be downloaded here. (The original pretrained checkpoint is no longer available, so we have also uploaded the pretrained MViT weights online.)
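For a quick sanity check of a downloaded checkpoint, a minimal sketch is below. It assumes a standard PyTorch checkpoint file; the path is hypothetical, and the "model_state" key follows the SlowFast convention, so verify both against your file.

```python
# Minimal sanity check of a downloaded checkpoint (path is hypothetical).
import torch

ckpt = torch.load("checkpoints/mvit_kinetics.pyth", map_location="cpu")
# SlowFast-style checkpoints often nest weights under "model_state";
# fall back to the loaded object if it is already a plain state dict.
state_dict = ckpt.get("model_state", ckpt)
print(f"{len(state_dict)} tensors; sample key: {next(iter(state_dict))}")
```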
Quick Start
Follow the example in GETTING_STARTED.md to start training your own model.
Pretrained Weights
We have released our pretrained GLC models with the best performance on EGTEA and Ego4D. You can download them via these links: [EGTEA weights | Ego4D weights].
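Below is a minimal sketch of restoring the released weights into an already-constructed GLC model. It assumes the checkpoints are standard PyTorch files whose weights may be nested under a SlowFast-style "model_state" key; the function name is ours, and the model itself should be built from this repo's configs first (see GETTING_STARTED.md).

```python
import torch
from torch import nn

def load_glc_weights(model: nn.Module, ckpt_path: str) -> nn.Module:
    """Restore released GLC weights into a model built from this repo's configs."""
    ckpt = torch.load(ckpt_path, map_location="cpu")
    # Unwrap SlowFast-style nesting if present; otherwise assume a plain state dict.
    state_dict = ckpt.get("model_state", ckpt)
    model.load_state_dict(state_dict)
    return model.eval()  # inference mode for gaze estimation
```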
Citation
If you find our work useful in your research, please cite it using the following BibTeX entries.
```bibtex
@inproceedings{lai2022eye,
  title={In the Eye of Transformer: Global-Local Correlation for Egocentric Gaze Estimation},
  author={Lai, Bolin and Liu, Miao and Ryan, Fiona and Rehg, James M},
  booktitle={British Machine Vision Conference (BMVC)},
  year={2022}
}

@article{lai2023eye,
  title={In the Eye of Transformer: Global-Local Correlation for Egocentric Gaze Estimation and Beyond},
  author={Lai, Bolin and Liu, Miao and Ryan, Fiona and Rehg, James M},
  journal={International Journal of Computer Vision},
  pages={1--18},
  year={2023},
  publisher={Springer}
}
```