Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks

Authors: Lukas Struppek, Dominik Hintersdorf, Antonio De Almeida Correia, Antonia Adler, Kristian Kersting

ICML 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our extensive evaluation confirms the improved robustness and flexibility of Plug & Play Attacks and their ability to create high-quality images revealing sensitive class characteristics. ... Our extensive evaluations in Sec. 5 demonstrate the high efficacy and robustness of our approach and that it also performs well under distributional shifts between datasets, whereas previous approaches fail to produce meaningful results.
Researcher Affiliation | Academia | 1 Department of Computer Science, Technical University of Darmstadt, Germany. 2 Universität der Bundeswehr München, Munich, Germany. 3 Centre for Cognitive Science, TU Darmstadt, Germany. 4 Hessian Center for AI (hessian.AI), Germany.
Pseudocode | No | The paper describes the attack pipeline and its various components in detail (e.g., in Section 4 and Figure 2), but it does not include any explicitly labeled pseudocode blocks or algorithms.
Open Source Code | Yes | Our source code is publicly available at https://github.com/LukasStruppek/Plug-and-Play-Attacks to reproduce the experiments and facilitate further analysis on MIAs.
Open Datasets | Yes | We trained these models on FaceScrub (Ng & Winkler, 2014) and CelebA (Liu et al., 2015) for facial image classification and Stanford Dogs (Khosla et al., 2011) for dog breed classification. ... The CelebA dataset is available at https://mmlab.ie.cuhk.edu.hk/projects/CelebA.html. ... The Stanford Dogs dataset is available at http://vision.stanford.edu/aditya86/ImageNetDogs. ... The FFHQ dataset and the licenses for individual images are available at https://github.com/NVlabs/ffhq-dataset. ... The AFHQ (Dog) dataset is available at https://github.com/clovaai/stargan-v2.
Dataset Splits | No | All datasets were split in the same way as for training the target models, using 90% of the samples for training and 10% for testing. ... We split each training dataset into 90% training data and 10% test data. The splits are identical for all target and evaluation models. While the paper specifies a train/test split, it does not define a separate validation split for model selection or hyperparameter tuning (see the split sketch after the table).
Hardware Specification | Yes | We performed all our experiments on NVIDIA DGX machines running NVIDIA DGX Server Version 5.1.0 and Ubuntu 20.04.2 LTS. The machines have 1.6TB of RAM and contain Tesla V100-SXM3-32GB-H GPUs and Intel Xeon Platinum 8174 CPUs. ... our approach needs about 5 minutes on a single GPU (Tesla V100-32GB).
Software Dependencies | Yes | We further relied on CUDA 11.4, Python 3.8.10, and PyTorch 1.10.0 with Torchvision 0.11.0 (Paszke et al., 2019) for our experiments.
Experiment Setup | Yes | All models were trained using the Adam optimizer (Kingma & Ba, 2015), with an initial learning rate of 0.001 and β = (0.9, 0.999). ... We multiplied the learning rate by a factor of 0.1 after 75 and 90 epochs. We trained the models for a total of 100 epochs with a batch size of 128. ... For attacking the FaceScrub and CelebA models ... we used a batch size of 20 and the Adam optimizer with a learning rate of 0.005 and β = (0.1, 0.1). For attacking the FaceScrub models ... we optimized each batch for 50 epochs. For attacking the CelebA models ... we increased the number of iterations to 70. (Sketches of the split, training, and attack configurations follow the table.)
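
To make the dataset-splits row concrete, here is a minimal sketch of a reproducible 90/10 split. The ImageFolder path and the seed value are illustrative assumptions; the paper states only the ratio and that target and evaluation models share identical splits.

import torch
from torch.utils.data import random_split
from torchvision import datasets, transforms

# Hypothetical dataset location; the paper does not prescribe a directory layout.
dataset = datasets.ImageFolder("data/facescrub", transform=transforms.ToTensor())

train_size = int(0.9 * len(dataset))   # 90% of the samples for training
test_size = len(dataset) - train_size  # remaining 10% for testing

# A fixed generator keeps the split identical across target and evaluation
# models; the seed value itself is hypothetical.
generator = torch.Generator().manual_seed(0)
train_set, test_set = random_split(dataset, [train_size, test_size], generator=generator)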
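
The target-model training configuration in the experiment-setup row maps directly onto standard PyTorch components. The sketch below uses the reported hyperparameters (Adam with lr 0.001 and β = (0.9, 0.999), learning rate multiplied by 0.1 after epochs 75 and 90, 100 epochs, batch size 128); the ResNet-18 architecture, the class count, and the synthetic stand-in data are assumptions for illustration, not taken from the quoted text.

import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset
from torchvision.models import resnet18

# Synthetic stand-in data so the sketch runs end to end; replace with the
# actual training split. 530 classes matches FaceScrub's number of identities.
num_classes = 530
data = TensorDataset(torch.randn(256, 3, 224, 224),
                     torch.randint(0, num_classes, (256,)))
train_loader = DataLoader(data, batch_size=128, shuffle=True)  # batch size 128

model = resnet18(num_classes=num_classes)  # architecture choice is illustrative

optimizer = optim.Adam(model.parameters(), lr=0.001, betas=(0.9, 0.999))
# Multiply the learning rate by a factor of 0.1 after epochs 75 and 90.
scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=[75, 90], gamma=0.1)
criterion = nn.CrossEntropyLoss()  # standard classification loss; an assumption

for epoch in range(100):  # trained for a total of 100 epochs
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()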
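
The attack-side hyperparameters in the same row describe optimizing batches of latent vectors against the target model. Below is a minimal sketch of such a loop with the reported settings (batch size 20, Adam with lr 0.005 and β = (0.1, 0.1), 50 iterations for FaceScrub, 70 for CelebA). The generator and classifier here are dummy stand-ins for the pre-trained StyleGAN2 generator and the trained target model, and the cross-entropy objective is a placeholder for the Poincaré loss the paper uses.

import torch
import torch.nn.functional as F
from torch import optim

def attack_batch(generator, target_model, target_class, latent_dim=512, steps=50):
    # One batch of 20 latent vectors, optimized toward the target class.
    w = torch.randn(20, latent_dim, requires_grad=True)
    optimizer = optim.Adam([w], lr=0.005, betas=(0.1, 0.1))
    labels = torch.full((20,), target_class, dtype=torch.long)
    for _ in range(steps):  # 50 iterations (FaceScrub) or 70 (CelebA)
        optimizer.zero_grad()
        images = generator(w)          # synthesize candidate images
        logits = target_model(images)
        # Placeholder objective; the paper replaces cross-entropy with a
        # Poincaré loss to stabilize the optimization.
        loss = F.cross_entropy(logits, labels)
        loss.backward()
        optimizer.step()
    return w.detach()

if __name__ == "__main__":
    # Dummy modules so the sketch runs; the actual attack plugs in a
    # pre-trained StyleGAN2 generator and the trained target classifier.
    generator = torch.nn.Sequential(
        torch.nn.Linear(512, 3 * 64 * 64), torch.nn.Unflatten(1, (3, 64, 64)))
    target_model = torch.nn.Sequential(
        torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 530))
    latents = attack_batch(generator, target_model, target_class=0)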