If you find the code and pre-trained models useful in your research, please consider citing:
```
@inproceedings{Zhang-ECCV-2016,
  author    = {Zhang, Shun and Gong, Yihong and Huang, Jia-Bin and Lim, Jongwoo and Wang, Jinjun and Ahuja, Narendra and Yang, Ming-Hsuan},
  title     = {Tracking Persons-of-Interest via Adaptive Discriminative Features},
  booktitle = {European Conference on Computer Vision},
  year      = {2016},
  pages     = {415--433}
}
```
```sh
# We call the root directory of the project code `AFL_ROOT`.
cd $AFL_ROOT/external/caffe-Triplet-New
make all -j8
make pycaffe
make matcaffe
```
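If the build succeeds, a quick way to confirm that `pycaffe` is importable is the short Python check below (the relative path assumes you run it from `$AFL_ROOT`; adjust it otherwise):

```python
# Optional sanity check for the pycaffe build.
import sys
sys.path.insert(0, 'external/caffe-Triplet-New/python')  # assumes cwd is $AFL_ROOT

import caffe  # fails here if `make pycaffe` did not succeed
caffe.set_mode_cpu()
print('pycaffe loaded from:', caffe.__file__)
```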
Download the T-ara images and extract all images into `$AFL_ROOT/data/Tara`.
Download the AlexNet (BVLC reference CaffeNet) model:

```sh
cd $AFL_ROOT/external/caffe-Triplet-New
./scripts/download_model_binary.py models/bvlc_reference_caffenet
```
Download the VGG-Face model and put it in `$AFL_ROOT/external/caffe-Triplet-New/models/VGG`. Download the pre-trained face model and put it in `$AFL_ROOT/external/caffe-Triplet-New/models/pretrained_web_face`.
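Before moving on, it can help to verify that the models are where the scripts expect them. The sketch below only checks that files exist; the exact file names (e.g. `VGG_FACE.caffemodel`) are assumptions based on the public releases, so adjust them to match what you actually downloaded:

```python
# Rough layout check, run from $AFL_ROOT. File names are assumptions.
import os

expected = [
    'external/caffe-Triplet-New/models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel',
    'external/caffe-Triplet-New/models/VGG/VGG_FACE.caffemodel',      # assumed name
    'external/caffe-Triplet-New/models/pretrained_web_face',          # assumed directory
]
for path in expected:
    print('OK     ' if os.path.exists(path) else 'MISSING', path)
```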
## Usage
Either run the script `run_Tara_example.sh` directly, or run the following commands step by step:
Mine constraints:

```
cd $AFL_ROOT
# Start MATLAB
matlab
>> genTracklet('Tara')
```
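For intuition, the constraints used later for triplet training come from the face tracklets: faces within one tracklet are treated as the same person (positive pairs), while faces from tracklets that appear in the same frame are treated as different persons (negative pairs). The Python sketch below only illustrates that idea; the data structure and function name are hypothetical, and the real logic lives in the MATLAB code.

```python
# Illustrative only (not the project's code): mining pairwise constraints
# from face tracklets, following the idea described in the paper.
# `tracklets` is a hypothetical structure: {tracklet_id: [(frame, face_id), ...]}.
from itertools import combinations

def mine_constraints(tracklets):
    positives, negatives = [], []
    frames = {tid: {f for f, _ in dets} for tid, dets in tracklets.items()}
    for dets in tracklets.values():
        # faces within one tracklet -> same person (must-link)
        positives += [(a, b) for (_, a), (_, b) in combinations(dets, 2)]
    for ta, tb in combinations(tracklets, 2):
        if frames[ta] & frames[tb]:
            # tracklets co-occurring in a frame -> different persons (cannot-link)
            negatives += [(a, b) for _, a in tracklets[ta] for _, b in tracklets[tb]]
    return positives, negatives
```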
Learn adaptive discriminative features:

```sh
cd $AFL_ROOT
sh shell_scripts/Tara/adapt_Triplet.sh
```
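`adapt_Triplet.sh` fine-tunes the network with a triplet loss on the mined constraints. As a rough NumPy illustration of the objective being optimized (the function name and margin value are placeholders, not the training code):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.5):
    """Hinge-style triplet loss on feature vectors.

    Encourages the anchor to be closer to the positive (same tracklet)
    than to the negative (co-occurring tracklet) by at least `margin`.
    The margin value here is a placeholder.
    """
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)
```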
Extract features:

```sh
sh shell_scripts/Tara/extract_All_Feas.sh
```
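`extract_All_Feas.sh` runs the network over all face crops to produce one descriptor per face. A minimal `pycaffe` sketch of extracting a single descriptor looks roughly like this; the prototxt/caffemodel paths, input size, and the `fc7` blob name are assumptions, so match them to the deploy files the scripts actually use:

```python
import numpy as np
import caffe

# Paths and the feature blob name are assumptions for illustration.
net = caffe.Net('models/VGG/VGG_FACE_deploy.prototxt',
                'models/VGG/VGG_FACE.caffemodel',
                caffe.TEST)

face = np.random.rand(3, 224, 224).astype(np.float32)  # stand-in for a preprocessed crop
net.blobs['data'].reshape(1, *face.shape)
net.blobs['data'].data[0] = face
net.forward()
feature = net.blobs['fc7'].data[0].copy()  # one descriptor per face
```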
Perform the hierarchical agglomerative clustering algorithm (this reproduces Fig. 6(a) in our supplementary material):