The official implementation of CompressTracker, from the ICCV 2025 paper "General Compression Framework for Efficient Transformer Object Tracking".
[Jun. 25, 2025]
- CompressTracker is accepted by ICCV 2025!
[Oct. 16, 2024]
- Code is available now!
[Sep. 28, 2024]
- We released our CompressTracker.
Our CompressTracker can be applied to any transformer-based tracking model. Moreover, CompressTracker supports arbitrary levels of compression.
| Tracker | GOT-10K (AO) | LaSOT (AUC) | TrackingNet (AUC) | UAV123 (AUC) |
|---|---|---|---|---|
| OSTrack-384 | 73.7 | 71.1 | 83.9 | 70.7 |
| OSTrack-256 | 71.0 | 69.1 | 83.1 | 68.3 |
Our CompressTracker only requires simple, end-to-end training instead of the multi-stage distillation used in MixFormerV2, so its training cost is much lower.
Our CompressTracker achieves a better trade-off between performance and speed.
Option 1: Use Anaconda (CUDA 11.7)
conda create -n compresstracker python=3.9
conda activate compresstracker
bash install.sh
Option 2: Create the environment from the provided YAML file (CUDA 11.7)
conda env create -f compresstracker_env.yaml
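Either option should leave you with a working PyTorch + CUDA setup. A minimal sanity check (not part of the repository) is:

```python
# Minimal sanity check for the new environment; not part of CompressTracker.
import torch

print("torch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```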
Run the following command to set the paths for this project:
python tracking/create_default_local_file.py --workspace_dir . --data_dir ./data --save_dir ./output
After running this command, you can still adjust the paths by editing these two files:
lib/train/admin/local.py # paths about training
lib/test/evaluation/local.py # paths about testing
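Both files contain plain Python path settings. As a rough sketch, lib/train/admin/local.py typically looks like the following (this follows the PyTracking/OSTrack convention; the attribute names generated by this repo may differ slightly):

```python
# lib/train/admin/local.py -- illustrative sketch only, following the PyTracking/OSTrack
# convention; the attribute names generated by this repo may differ slightly.
class EnvironmentSettings:
    def __init__(self):
        self.workspace_dir = '/path/to/CompressTracker/output'        # checkpoints and logs
        self.tensorboard_dir = '/path/to/CompressTracker/output/tensorboard'
        self.pretrained_networks = '/path/to/CompressTracker/pretrained_models'
        self.lasot_dir = '/path/to/CompressTracker/data/lasot'
        self.got10k_dir = '/path/to/CompressTracker/data/got10k/train'
        self.trackingnet_dir = '/path/to/CompressTracker/data/trackingnet'
        self.coco_dir = '/path/to/CompressTracker/data/coco'
```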
Put the tracking datasets in ./data. It should look like this:
${PROJECT_ROOT}
-- data
-- lasot
|-- airplane
|-- basketball
|-- bear
...
-- got10k
|-- test
|-- train
|-- val
-- coco
|-- annotations
|-- images
-- trackingnet
|-- TRAIN_0
|-- TRAIN_1
...
|-- TRAIN_11
|-- TEST
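You only need the benchmarks you plan to train and evaluate on. A small helper like the following (not part of the repo; paths assume the layout above) can catch missing datasets early:

```python
# check_data.py -- illustrative helper, not part of CompressTracker.
# Verifies that the dataset folders shown in the layout above exist under ./data.
import os

EXPECTED = [
    "data/lasot",
    "data/got10k/train",
    "data/got10k/val",
    "data/got10k/test",
    "data/coco/annotations",
    "data/coco/images",
    "data/trackingnet/TEST",
]

for path in EXPECTED:
    status = "ok" if os.path.isdir(path) else "MISSING"
    print(f"{status:8s} {path}")
```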
Download the pre-trained MAE ViT-Base weights and put them under $PROJECT_ROOT$/pretrained_models. Also download the pretrained OSTrack weights and place them under $PROJECT_ROOT$/pretrained_models.
python tracking/train.py --script compresstracker --config compresstracker_vitb_256_4 --save_dir ./output --mode multiple --nproc_per_node 8
Replace --config with the desired model config under experiments/compresstracker.
Note that CompressTracker supports any structure, any resolution, and any level of compression. We provide the code for the CompressTracker-2/3/4/6 variants from our paper, and you can easily modify it to compress your own model.
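For example, assuming the configs under experiments/compresstracker follow the same naming pattern (the exact config file name here is an assumption; check that folder), CompressTracker-6 would be trained by swapping the config name:

python tracking/train.py --script compresstracker --config compresstracker_vitb_256_6 --save_dir ./output --mode multiple --nproc_per_node 8  # config name assumed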
Some testing examples:
- LaSOT or other offline-evaluated benchmarks (modify --dataset accordingly)
python tracking/test.py compresstracker compresstracker_vitb_256_4 --dataset lasot --threads 64 --num_gpus 8
python tracking/analysis_results.py # modify the tracker configs and names in this script (see the sketch after these examples)
- TrackingNet
python tracking/test.py compresstracker compresstracker_vitb_256_4 --dataset trackingnet --threads 64 --num_gpus 8
python lib/test/utils/transform_trackingnet.py --tracker_name compresstracker --cfg_name compresstracker_vitb_256_4
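For the analysis step above (tracking/analysis_results.py), the tracker is usually registered inside the script roughly as follows. This is a sketch based on the PyTracking convention inherited from OSTrack; the exact helper names, import paths, and signatures may differ in this repo:

```python
# Inside tracking/analysis_results.py -- illustrative sketch following the PyTracking
# convention; exact helper names/signatures in this repo may differ.
from lib.test.analysis.plot_results import print_results
from lib.test.evaluation import get_dataset, trackerlist

trackers = []
trackers.extend(trackerlist(name='compresstracker',
                            parameter_name='compresstracker_vitb_256_4',
                            dataset_name='lasot',
                            run_ids=None,
                            display_name='CompressTracker-4'))

dataset = get_dataset('lasot')
print_results(trackers, dataset, 'lasot', merge_results=True,
              plot_types=('success', 'norm_prec', 'prec'))
```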
Note: The speeds reported in our paper were measured on a single NVIDIA RTX 2080 Ti GPU.
# Profile compresstracker_vitb_256_4
python tracking/profile_model.py --script compresstracker --config compresstracker_vitb_256_4
- Thanks to OSTrack, the PyTracking library, MixFormerV2, and OneTracker, which helped us quickly implement our ideas.
- We use the ViT implementation from the timm repo.
If our work is useful for your research, please consider citing:
@inproceedings{hong2025general,
title={General compression framework for efficient transformer object tracking},
author={Hong, Lingyi and Li, Jinglun and Zhou, Xinyu and Yan, Shilin and Guo, Pinxue and Jiang, Kaixun and Chen, Zhaoyu and Gao, Shuyong and Li, Runze and Sheng, Xingdong and others},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
pages={13427--13437},
year={2025}
}