This repository was archived by the owner on Aug 28, 2021. It is now read-only.
By combining large-scale adversarial training and feature-denoising layers,
we developed ImageNet classifiers with strong adversarial robustness.
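As a rough illustration of the feature-denoising idea, the sketch below implements the dot-product variant of a non-local means denoising operation over a feature map, followed by a residual connection. This is a simplified numpy sketch, not the repository's code: the real blocks operate on convolutional feature maps inside a ResNet and apply a learned 1x1 convolution before the residual add, which is replaced by the identity here.

```python
import numpy as np

def denoise_block(x):
    """Non-local means feature denoising (dot-product version), a sketch.

    x: feature map of shape (H, W, C).
    Each output position is a weighted mean over all positions, with
    dot-product affinities normalized by the number of positions N.
    """
    H, W, C = x.shape
    flat = x.reshape(-1, C)             # (N, C), N = H * W
    sims = flat @ flat.T                # pairwise dot-product affinities, (N, N)
    weights = sims / flat.shape[0]      # dot-product version: normalize by N
    denoised = weights @ flat           # weighted mean over all positions
    # The paper applies a learned 1x1 conv here; we use the identity
    # for simplicity, then add the residual connection.
    return (flat + denoised).reshape(H, W, C)
```

The residual connection lets the block default to a near-identity mapping, so inserting it into a pretrained network degrades clean features as little as possible.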
Trained on 128 GPUs, our ImageNet classifier attains 42.6% accuracy against an extremely strong 2000-step white-box targeted PGD attack, a setting in which no previous model had achieved more than 1% accuracy.
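For readers unfamiliar with the attack used in this evaluation, the following is a minimal sketch of targeted projected gradient descent (PGD) against a toy linear softmax classifier. The model, step size, and epsilon are illustrative assumptions; the evaluation above runs 2000 such steps against a full ImageNet network.

```python
import numpy as np

def pgd_targeted(x, y_target, W, b, eps=0.03, alpha=0.005, steps=40):
    """Targeted white-box PGD on a toy softmax classifier (a sketch).

    Minimizes cross-entropy toward y_target, stepping by the gradient sign
    and projecting back into the L-inf ball of radius eps around x.
    """
    x_adv = x.copy()
    for _ in range(steps):
        logits = W @ x_adv + b
        p = np.exp(logits - logits.max())
        p /= p.sum()
        onehot = np.zeros_like(p)
        onehot[y_target] = 1.0
        # gradient of cross-entropy toward the target w.r.t. the input
        grad = W.T @ (p - onehot)
        x_adv = x_adv - alpha * np.sign(grad)     # descend toward the target class
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # keep a valid image range
    return x_adv
```

A targeted attack only counts as successful if the model outputs the attacker-chosen class, which is why accuracy under this attack is a meaningful robustness measure.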
This repository contains:

- Our trained models, together with an evaluation script to verify their robustness. We welcome attackers to attack our released models and defenders to compare with them.
- Our distributed adversarial training code on ImageNet.
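The core loop of adversarial training can be sketched as a min-max procedure: craft a perturbation that maximizes the loss, then update the weights to minimize the loss on the perturbed input. The toy example below uses a single-step (FGSM-style) inner attack on a linear softmax model purely for illustration; the released code trains ResNets with multi-step PGD, distributed across many GPUs.

```python
import numpy as np

def adv_train_step(W, b, x, y, eps=0.03, lr=0.1):
    """One adversarial-training step for a toy softmax classifier (a sketch).

    Inner maximization: one signed-gradient step on the input.
    Outer minimization: a gradient step on the weights using the
    perturbed example instead of the clean one.
    """
    def grads(W, b, x, y):
        logits = W @ x + b
        p = np.exp(logits - logits.max())
        p /= p.sum()
        onehot = np.zeros_like(p)
        onehot[y] = 1.0
        err = p - onehot
        return W.T @ err, np.outer(err, x), err  # d/dx, d/dW, d/db

    gx, _, _ = grads(W, b, x, y)
    x_adv = np.clip(x + eps * np.sign(gx), 0.0, 1.0)  # inner maximization
    _, gW, gb = grads(W, b, x_adv, y)
    return W - lr * gW, b - lr * gb                   # outer minimization
```

In practice the strength of the inner attack (number of PGD steps, epsilon) largely determines both the cost of training and the robustness of the resulting model.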
This project is under the CC-BY-NC 4.0 license. See LICENSE for details.
Citation
If you use our code or models, or wish to refer to our results, please use the following BibTeX entry:
@InProceedings{Xie_2019_CVPR,
author = {Xie, Cihang and Wu, Yuxin and van der Maaten, Laurens and Yuille, Alan L. and He, Kaiming},
title = {Feature Denoising for Improving Adversarial Robustness},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2019}
}