E.T.Track - Efficient Visual Tracking with Exemplar Transformers
Official implementation of E.T.Track.
E.T.Track is built on our proposed Exemplar Transformer, a transformer module that uses a single instance-level attention layer for real-time visual object tracking.
E.T.Track is up to 8x faster than other transformer-based models, and consistently outperforms competing lightweight trackers that can operate in real time on standard CPUs.
Modify local.py.
Set the dataset and result paths used for evaluation in pytracking/evaluation/local.py
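As a rough sketch, the path settings in local.py typically look like the following. The attribute names below (results_path, network_path, otb_path) are illustrative assumptions based on common PyTracking setups, stubbed with SimpleNamespace so the sketch is self-contained; check the repo's own local.py template for the exact names.

```python
# Hypothetical sketch of pytracking/evaluation/local.py path settings.
# SimpleNamespace stands in for PyTracking's settings object; the
# attribute names are assumptions, not the authoritative API.
from types import SimpleNamespace

def local_env_settings():
    settings = SimpleNamespace()
    settings.results_path = '/path/to/tracking_results/'  # where raw results are written
    settings.network_path = '/path/to/networks/'          # pretrained E.T.Track weights
    settings.otb_path = '/path/to/OTB100/'                # OTB-100 dataset root
    return settings
```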
Evaluation
We evaluate our models using PyTracking.
The sequences comparing E.T.Track and LT-Mobile in the ''Video Visualizations'' section can be found here.
Add the correct dataset in pytracking/experiments/myexperiments.py (default: OTB-100)
Run python3 -m pytracking.run_experiment myexperiments et_tracker --threads 0
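An experiment entry in myexperiments.py usually returns a list of trackers and a dataset. The sketch below shows that general shape; trackerlist and get_dataset are stubbed locally so the example is self-contained, and their signatures here are assumptions based on typical PyTracking experiment files, not the repo's exact API.

```python
# Hypothetical sketch of an experiment entry in
# pytracking/experiments/myexperiments.py.

def trackerlist(name, parameter_name, run_ids):
    # Stub for PyTracking's helper: one entry per run id.
    return [(name, parameter_name, i) for i in run_ids]

def get_dataset(name):
    # Stub for PyTracking's dataset loader.
    return name

def et_tracker():
    # E.T.Track evaluated on OTB-100 (the default dataset).
    trackers = trackerlist('et_tracker', 'et_tracker', range(1))
    dataset = get_dataset('otb')
    return trackers, dataset
```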
Citation
If you use this code, please consider citing the following paper:
@inproceedings{blatter2023efficient,
title={Efficient visual tracking with exemplar transformers},
author={Blatter, Philippe and Kanakis, Menelaos and Danelljan, Martin and Van Gool, Luc},
booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
pages={1571--1581},
year={2023}
}
Efficient Visual Tracking with Exemplar Transformers [WACV2023]