Then, install the required packages: PyTorch 1.7.1, torchvision 0.8.2, timm 0.5.4, and pyyaml. To install them all, simply run:
pip3 install -r requirements.txt
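Based on the versions listed above, the pinned requirements.txt likely resembles the following sketch (the actual file in the repository may differ):

```
torch==1.7.1
torchvision==0.8.2
timm==0.5.4
pyyaml
```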
Download and extract the ImageNet dataset to the data folder. Assuming you're using 8 GPUs for training, simply run:
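The exact launch command is not reproduced here; as a hedged sketch, a typical timm-style 8-GPU distributed launch would look roughly like the following (the script name, config path, and file name are assumptions based on common timm usage, not confirmed by this repository):

```shell
# Hypothetical launch: timm's distributed_train.sh wraps
# torch.distributed.launch with --nproc_per_node set to the GPU count (8 here).
./distributed_train.sh 8 data --config configs/deit_s_transmix.yaml
```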
By default, all of our config files enable training with TransMix.
If you want to enable TransMix when training your own model,
add the --transmix flag to your training script. For example:
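As an illustration, appending the flag to a timm-style training invocation might look like this (train.py, the model name, and the other arguments are placeholders, not taken from this repository):

```shell
# Hypothetical example: add --transmix to your existing training command.
python train.py data --model deit_small_patch16_224 --transmix
```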
Alternatively, you can specify transmix: True in your yaml config file, as we did in deit_s_transmix.
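A minimal sketch of such a YAML entry follows; only the transmix key is taken from the text above, and the surrounding key is illustrative:

```yaml
# Hypothetical excerpt of a training config such as deit_s_transmix.yaml
model: deit_small_patch16_224   # illustrative model name
transmix: True                  # enables TransMix during training
```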
To evaluate a model trained with TransMix, please refer to timm.
Validation accuracy is also reported during training.
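As a sketch, evaluation with timm's standard validate.py script would look roughly like this (the dataset path, model name, and checkpoint path are placeholders):

```shell
# timm's validate.py evaluates a checkpoint on a validation set.
python validate.py data/imagenet/val \
    --model deit_small_patch16_224 \
    --checkpoint output/checkpoint.pth.tar
```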
Model Zoo
Coming soon!
Acknowledgement
This repository is built using the timm library and
the DeiT repository.
License
This repository is released under the Apache 2.0 license as found in the LICENSE file.
Cite This Paper
If you find our code helpful for your research, please use the following BibTeX entry to cite our paper:
@InProceedings{transmix,
    title     = {TransMix: Attend to Mix for Vision Transformers},
    author    = {Chen, Jie-Neng and Sun, Shuyang and He, Ju and Torr, Philip and Yuille, Alan and Bai, Song},
    booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022}
}
About
[CVPR 2022] This repository contains the official implementation of the paper: TransMix: Attend to Mix for Vision Transformers.