There are four datasets attached, corresponding to the four datasets discussed in Section 3 of the paper (a minimal loading sketch follows this list):
robust_CIFAR: A dataset containing only the features relevant to a robust model, on which standard (non-robust) training yields good robust accuracy.
non_robust_CIFAR: A dataset containing only the features relevant to a standard (naturally trained) model; the images do not appear semantically related to their labels, yet the dataset suffices for good test-set generalization.
drand_CIFAR: A dataset of adversarial examples crafted against a standard model towards a uniformly random target class and relabeled as that target class. The only features useful on this training set are the non-robust features of the original dataset, so training on it yields good standard accuracy.
ddet_CIFAR: A dataset of adversarial examples crafted against a standard model towards a deterministic target class (y+1 mod C) and relabeled as that target class. On this training set both robust and non-robust features are useful, but the robust features actually hurt generalization on the true dataset (instead, they support generalization to the relabeled (x, y+1) dataset).
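Below is a minimal sketch for loading one of these datasets into PyTorch, assuming each dataset ships as saved image and label tensors; the directory layout and the file names CIFAR_ims / CIFAR_lab are assumptions for illustration, so check the actual release for the exact format.

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

# Assumed layout: one directory per dataset, each holding an image tensor and a
# label tensor saved with torch.save (the file names below are hypothetical).
data_dir = "robust_CIFAR"  # or non_robust_CIFAR, drand_CIFAR, ddet_CIFAR
images = torch.load(f"{data_dir}/CIFAR_ims")  # assumed shape (N, 3, 32, 32), values in [0, 1]
labels = torch.load(f"{data_dir}/CIFAR_lab")  # assumed shape (N,)

train_set = TensorDataset(images, labels)
train_loader = DataLoader(train_set, batch_size=128, shuffle=True, num_workers=2)

# Train any standard CIFAR-10 architecture on train_loader, then evaluate on the
# ordinary CIFAR-10 test set as described above.
```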
Results
In our paper, we use fairly standard hyperparameters (Appendix C.2) and obtain the following accuracies (robust accuracy is measured against l2 adversarial examples with eps = 0.25):
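As a rough illustration of how such a robust-accuracy number can be measured, here is a generic PGD-based sketch for an l2 adversary with eps = 0.25. This is not the exact evaluation code used for the paper; the step size, iteration count, and the pgd_l2 / robust_accuracy helper names are assumptions made for this example.

```python
import torch
import torch.nn.functional as F

def pgd_l2(model, x, y, eps=0.25, alpha=0.05, steps=20):
    """Untargeted l2-bounded PGD attack (sketch); eps matches the 0.25 budget above."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Normalize the gradient to unit l2 norm per example, then take a step.
        grad_norm = grad.flatten(1).norm(dim=1).clamp(min=1e-12).view(-1, 1, 1, 1)
        x_adv = x_adv.detach() + alpha * grad / grad_norm
        # Project back onto the l2 ball of radius eps around the clean input.
        delta = x_adv - x
        delta_norm = delta.flatten(1).norm(dim=1).clamp(min=1e-12).view(-1, 1, 1, 1)
        factor = torch.clamp(eps / delta_norm, max=1.0)
        x_adv = (x + delta * factor).clamp(0, 1)
    return x_adv.detach()

@torch.no_grad()
def robust_accuracy(model, loader, device="cuda"):
    correct = total = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        with torch.enable_grad():  # the attack itself needs gradients
            x_adv = pgd_l2(model, x, y)
        correct += (model(x_adv).argmax(1) == y).sum().item()
        total += y.numel()
    return correct / total
```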
@inproceedings{ilyas2019adversarial,
    title = {Adversarial Examples are not Bugs, They Are Features},
    author = {Andrew Ilyas and Shibani Santurkar and Dimitris Tsipras and Logan Engstrom and Brandon Tran and Aleksander Madry},
    booktitle = {ArXiv preprint arXiv:1905.02175},
    year = {2019}
}
Independent Reproductions
(Not checked for correctness by the paper authors)