Analyze the embedding extracted from specific layers of models given adversarial examples via FGSM, PGD, MI-FGSM, and DeepFool algorithms.
Introduction
This project investigates which layers in popular deep learning model architectures are most vulnerable to adversarial examples and contribute most to misclassification.
Unzip the data into ./data/imagenette2/.
Process the images to a smaller size.
python3 process.py
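The repository's process.py is the authoritative preprocessing step. As a rough illustration only, a resizing pass over the Imagenette folder might look like the sketch below; the destination folder and the 160x160 target size are assumptions, not values taken from the script.

```python
# Illustrative resizing pass (not the repository's process.py); the destination
# folder and the 160x160 target size are assumptions.
from pathlib import Path
from PIL import Image

SRC = Path("./data/imagenette2")
DST = Path("./data/imagenette2_small")   # assumed output location
SIZE = (160, 160)                        # assumed target resolution

for img_path in SRC.rglob("*.JPEG"):
    out_path = DST / img_path.relative_to(SRC)
    out_path.parent.mkdir(parents=True, exist_ok=True)
    with Image.open(img_path) as img:
        img.convert("RGB").resize(SIZE, Image.BILINEAR).save(out_path, "JPEG")
```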
Train (fine-tune) the four models: ResNet-18, ResNet-50, DenseNet-121, and Wide ResNet-50 v2.
./train.sh
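train.sh drives the actual training. The snippet below is only a sketch of what fine-tuning one of the four models on the 10-class Imagenette data typically involves; the hyperparameters, transforms, and checkpoint name are illustrative, and the weights argument assumes torchvision >= 0.13.

```python
# Rough fine-tuning sketch (not the repository's train.sh); assumes the data is
# laid out as an ImageFolder and uses standard ImageNet normalization.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

tfm = transforms.Compose([
    transforms.Resize((160, 160)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("./data/imagenette2/train", transform=tfm)
loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True, num_workers=4)

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 10)  # Imagenette has 10 classes
model = model.to(device)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):  # illustrative number of epochs
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        criterion(model(x), y).backward()
        optimizer.step()

torch.save(model.state_dict(), "resnet18_imagenette.pt")
```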
Generate adversarial examples. Five scripts are provided for adversarial example generation; consider your available computing resources when deciding the generation order to avoid CUDA out-of-memory errors.
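The five generation scripts are not reproduced here. For reference, single-step FGSM, the simplest of the four attacks, can be sketched as below; the epsilon value is an assumption, and the code assumes pixel values in [0, 1] (i.e., normalization is folded into the model).

```python
# Minimal FGSM sketch for reference only; epsilon is an assumption and inputs
# are assumed to lie in [0, 1] (normalization handled inside the model).
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Single-step FGSM: move x in the direction of the sign of the loss gradient."""
    model.eval()
    x = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()
```

PGD and MI-FGSM iterate this step (with projection onto the epsilon-ball and gradient momentum, respectively), while DeepFool instead searches for a minimal perturbation that crosses the nearest decision boundary.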
To analyze the transferability of each kind of adversarial example, refer to Transferability.ipynb.
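Transferability.ipynb contains the full analysis. Conceptually, transferability is measured by evaluating examples crafted on one (source) model against a different (target) model, as in the hedged sketch below; the function name is illustrative, not a helper from the repository.

```python
# Hedged sketch: accuracy of a target model on adversarial examples that were
# crafted on a *different* source model; lower accuracy = better transferability.
import torch

@torch.no_grad()
def eval_transfer(target_model, adv_loader, device="cuda"):
    target_model.eval()
    correct, total = 0, 0
    for x_adv, y in adv_loader:
        x_adv, y = x_adv.to(device), y.to(device)
        correct += (target_model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total
```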
To investigate the embeddings of each layer, refer to Analysis attack resnet18.ipynb, Analysis attack resnet50.ipynb, Analysis attack densenet121.ipynb, and Analysis attack wide resnet50.ipynb.
Several helper functions are provided in these notebooks for further analysis.
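The notebooks' own helpers are not reproduced here. One common way to pull per-layer embeddings out of a torchvision model is a forward hook, sketched below; the layer names are illustrative ResNet-style block names and depend on the architecture.

```python
# Sketch of per-layer embedding extraction with forward hooks; layer names are
# illustrative and depend on the architecture.
import torch

def extract_embeddings(model, x, layer_names=("layer1", "layer2", "layer3", "layer4")):
    feats, handles = {}, []
    for name, module in model.named_modules():
        if name in layer_names:
            handles.append(module.register_forward_hook(
                lambda m, inp, out, name=name: feats.update({name: out.detach()})))
    with torch.no_grad():
        model(x)
    for h in handles:
        h.remove()
    return feats  # dict: layer name -> activation tensor
```

Comparing these activations for clean versus adversarial inputs (for example, by cosine similarity or a low-dimensional projection) shows at which depth the representations start to diverge.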