Asymmetric Patch Sampling for Contrastive Learning
PyTorch implementation and pre-trained models for paper APS: Asymmetric Patch Sampling for Contrastive Learning.
APS is a novel asymmetric patch sampling strategy for contrastive learning that further boosts the appearance asymmetry between views for better representations. APS significantly outperforms existing self-supervised methods on both ImageNet-1K and the CIFAR datasets, e.g., a 2.5% fine-tuning accuracy improvement on CIFAR100. In addition, compared to other self-supervised methods, APS is more memory- and compute-efficient during training.
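The core idea of asymmetric sampling can be illustrated with a toy sketch: the two contrastive views keep different, independently sampled subsets of image patches, so their visible content differs strongly. This is only an illustrative sketch of the general idea; `sample_patch_indices` and the keep ratios are hypothetical and not the paper's exact sampler.

```python
import random

def sample_patch_indices(num_patches, keep_ratio):
    """Randomly keep a fraction of patch indices (hypothetical helper)."""
    k = max(1, int(num_patches * keep_ratio))
    return sorted(random.sample(range(num_patches), k))

# Asymmetric sampling: each view keeps a different fraction of patches
# (ratios here are illustrative, not the paper's settings).
num_patches = 196  # e.g. a 224x224 image split into 16x16 patches
view_a = sample_patch_indices(num_patches, keep_ratio=0.25)  # sparse view
view_b = sample_patch_indices(num_patches, keep_ratio=0.75)  # dense view
```

Because the two index sets are drawn independently with different sizes, the encoder sees strongly different appearances of the same image, which is the asymmetry the method exploits.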
Installation
conda create -n asp python=3.9
pip install -r requirements.txt
Datasets
Torchvision provides the CIFAR10 and CIFAR100 datasets. Their root paths are set to ./dataset/cifar10 and ./dataset/cifar100, respectively. The ImageNet-1K dataset is placed at ./dataset/ILSVRC.
Pre-training
To start APS pre-training, run the pre-training script with the arguments described below.
• Arguments
arch: the architecture of the pre-trained model; choose from vit-tiny, vit-small, and vit-base.
dataset: the dataset used for pre-training.
data-root: the path to the dataset.
nepoch: the number of pre-training epochs.
To run APS with a ViT-Small/2 network on a single node on CIFAR100 for 1600 epochs, use the following command.
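A hypothetical invocation, assuming a pre-training entry point named main_pretrain.py that accepts the arguments listed above (the actual script name in the repository may differ):

```shell
# Hypothetical entry point; check the repository for the actual script name.
python main_pretrain.py \
    --arch vit-small \
    --dataset cifar100 \
    --data-root ./dataset/cifar100 \
    --nepoch 1600
```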
License
This project is under the CC-BY-NC 4.0 license. See LICENSE for details.
Citation
@article{shen2025asymmetric,
title={Asymmetric Patch Sampling for Contrastive Learning},
author={Shen, Chengchao and Chen, Jianzhong and Wang, Shu and Kuang, Hulin and Liu, Jin and Wang, Jianxin},
journal={Pattern Recognition},
year={2025}
}
About
The official implementation of "Asymmetric Patch Sampling for Contrastive Learning"