If you use this software in an academic article, please cite:
@inproceedings{nguyen2015deep,
  title={Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images},
  author={Nguyen, Anh and Yosinski, Jason and Clune, Jeff},
  booktitle={Computer Vision and Pattern Recognition (CVPR), 2015 IEEE Conference on},
  year={2015},
  organization={IEEE}
}
g++ 4.9 (a C++ compiler supporting the C++11 standard)
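If you want to confirm your toolchain before building, one quick (unofficial) check is to compile a tiny program that uses C++11 features. This is just a sanity check, not part of the project's build instructions:

```cpp
// cxx11_check.cpp -- quick test that the compiler accepts C++11.
// Build with: g++ -std=c++11 cxx11_check.cpp -o cxx11_check
#include <iostream>
#include <vector>

int main() {
    auto xs = std::vector<int>{1, 2, 3};  // C++11: auto + brace initialization
    for (auto x : xs)                     // C++11: range-based for loop
        std::cout << x << ' ';
    std::cout << '\n';
}
```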
Note: these are specific versions of the two frameworks (Sferes and Caffe) that include our additional work needed to produce the images in the paper; they are not the same as their master branches.
An MNIST experiment (Figs. 4 and 5 in the paper) can be run directly on a local 4-core machine in a reasonable amount of time (~5 minutes or less for 200 generations).
An ImageNet experiment needs to be run in a cluster environment; it took us ~4 days on 128 cores to run 5,000 generations and produce 1,000 images (Fig. 8 in the paper). A minimal sketch of the evolutionary loop behind these experiments follows below.
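For intuition only, the core of these experiments is an evolutionary loop: candidate images are mutated, scored by the DNN's confidence for a target class, and the highest-scoring candidates survive to the next generation. The sketch below is a simplified truncation-selection loop, not the actual Sferes/MAP-Elites setup, and `dnn_confidence` is a hypothetical placeholder for scoring a candidate with Caffe:

```cpp
// Hedged sketch of the generation loop: evolve images toward high classifier
// confidence. dnn_confidence() is a toy stand-in; the real code evaluates
// candidates with Caffe via Sferes.
#include <algorithm>
#include <random>
#include <vector>

using Image = std::vector<float>;  // flattened pixels in [0,1]

// Placeholder fitness standing in for the DNN's confidence for the target
// class. Here: mean brightness, purely for demonstration.
float dnn_confidence(const Image& img) {
    float s = 0.f;
    for (float p : img) s += p;
    return s / img.size();
}

int main() {
    std::mt19937 rng(0);
    std::uniform_real_distribution<float> unit(0.f, 1.f);
    std::normal_distribution<float> noise(0.f, 0.05f);

    const int pop_size = 50, generations = 200, dim = 28 * 28;  // MNIST-sized
    std::vector<Image> pop(pop_size, Image(dim));
    for (auto& img : pop)
        for (auto& p : img) p = unit(rng);  // random initial population

    for (int g = 0; g < generations; ++g) {
        // Sort by fitness, best first.
        std::sort(pop.begin(), pop.end(), [](const Image& a, const Image& b) {
            return dnn_confidence(a) > dnn_confidence(b);
        });
        // Replace the worst half with mutated copies of the best half,
        // clamping pixels back into [0,1].
        for (int i = pop_size / 2; i < pop_size; ++i) {
            pop[i] = pop[i - pop_size / 2];
            for (auto& p : pop[i])
                p = std::min(1.f, std::max(0.f, p + noise(rng)));
        }
    }
    return 0;
}
```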
To reproduce the gradient ascent fooling images (Figures 13, S3, S4, S5, S6, and S7 in the paper), see the documentation in the caffe/ascent directory. You will need the ascent branch rather than master, because the two experiments require different versions of Caffe.
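For reference, gradient ascent fooling repeatedly nudges the input pixels in the direction that increases the network's score for a target class: x ← x + η · ∂S_c/∂x. The toy sketch below illustrates only that update rule on a linear score w·x, whose gradient is known analytically; the real experiments backpropagate through the network in Caffe, and the weights and step size here are illustrative:

```cpp
// Hedged illustration of the gradient-ascent update: raise a class score
// S_c(x) by stepping the input along its gradient. For the toy linear score
// S(x) = w.x, the gradient is simply w; the real code backpropagates through
// the DNN in Caffe (see caffe/ascent).
#include <cstdio>
#include <vector>

int main() {
    const int dim = 4;
    std::vector<float> w = {0.5f, -1.0f, 2.0f, 0.25f};  // toy class weights
    std::vector<float> x(dim, 0.0f);                    // start from zeros
    const float lr = 0.1f;                              // step size (eta)

    for (int step = 0; step < 100; ++step)
        for (int i = 0; i < dim; ++i)
            x[i] += lr * w[i];  // ascent step: x += lr * dS/dx

    float score = 0.f;
    for (int i = 0; i < dim; ++i) score += w[i] * x[i];
    std::printf("final score: %f\n", score);
    return 0;
}
```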
Updates
Our fork here adds support for the latest Caffe, plus experiments that create recognizable images instead of unrecognizable ones.
License
Please refer to the licenses of the Sferes and Caffe projects.