This is a PyTorch implementation of our paper at ICCV 2021:
Knowledge Enriched Distributional Model Inversion Attacks [paper] [arxiv]
We propose a novel 'Inversion-Specific GAN' that can better distill, from public data, knowledge useful for attacking private models. Moreover, we propose to model a private data distribution for each target class, which we refer to as 'Distributional Recovery'.
Requirement
This code has been tested with Python 3.6, PyTorch 1.0, and CUDA 10.0.
Getting Started
Install required packages.
Download the relevant datasets, including CelebA, MNIST, and CIFAR-10.
--improved_flag indicates whether an inversion-specific GAN is used. If False, a general GAN is applied instead.
--dist_flag indicates whether distributional recovery is performed. If False, optimization is applied to a single sample rather than a distribution.
Setting both improved_flag and dist_flag to False reduces to the method proposed in [1].
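The difference the dist_flag makes can be illustrated with a toy sketch. This is not the repository's code: the quadratic loss below is a hypothetical stand-in for the attack loss, and all names are illustrative. It contrasts optimizing a single latent code with optimizing the parameters of a Gaussian over latent codes via the reparameterization trick, which is the general idea behind recovering a distribution rather than a point.

```python
# Toy sketch (hypothetical, not the paper's implementation): single-sample
# inversion vs. distributional recovery on a quadratic surrogate loss.
import numpy as np

rng = np.random.default_rng(0)
target = np.array([2.0, -1.0])  # stand-in for the class-conditional optimum

def grad(z):
    # Gradient of the toy loss ||z - target||^2 with respect to z.
    return 2.0 * (z - target)

# --- dist_flag = False: gradient descent on a single latent sample ---
z = np.zeros(2)
for _ in range(200):
    z -= 0.05 * grad(z)

# --- dist_flag = True: optimize a Gaussian N(mu, sigma^2) over latents,
# using the reparameterization z = mu + sigma * eps with eps ~ N(0, I) ---
mu, log_sigma = np.zeros(2), np.zeros(2)
for _ in range(200):
    eps = rng.standard_normal((32, 2))            # Monte Carlo samples
    z_samples = mu + np.exp(log_sigma) * eps
    g = grad(z_samples)                           # dL/dz per sample
    mu -= 0.05 * g.mean(axis=0)                   # dL/dmu = dL/dz
    log_sigma -= 0.05 * (g * eps * np.exp(log_sigma)).mean(axis=0)

# Both the single sample z and the distribution mean mu converge toward
# the target; the learned sigma shrinks as the distribution concentrates.
```

With the real attack loss in place of the quadratic, optimizing (mu, sigma) recovers a distribution of plausible private samples per class instead of a single reconstruction.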
Reference
[1]
Zhang, Yuheng, et al. "The secret revealer: Generative model-inversion attacks against deep neural networks." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.