Pseudo-Private Data Guided Model Inversion Attacks
Authors: Xiong Peng, Bo Han, Feng Liu, Tongliang Liu, Mingyuan Zhou
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirically, through extensive experimentation, we demonstrate that our solution significantly improves the performance of the SOTA MI methods across various settings, including white-box, black-box, and label-only MIAs (Sec. 4). |
| Researcher Affiliation | Academia | Xiong Peng1 Bo Han1 Feng Liu2 Tongliang Liu3 Mingyuan Zhou4 1TMLR Group, Department of Computer Science, Hong Kong Baptist University 2School of Computing and Information Systems, The University of Melbourne 3Sydney AI Centre, The University of Sydney 4McCombs School of Business, The University of Texas at Austin |
| Pseudocode | Yes | B The Algorithmic Realizations of PPDG-MI This section presents the detailed algorithmic realization of the pseudo-private data guided model inversion (PPDG-MI) method. We describe two variants of the PPDG-MI method, each tailored for different MI scenarios. The first variant utilizes vanilla tuning (cf. Alg. 1), applicable for low-resolution MIAs where the adversary trains a GAN from scratch. The second variant employs nuanced point-wise or batch-wise tuning (cf. Alg. 2), suitable for high-resolution MIAs (i.e., PPA) where pre-trained generators are provided without access to the original training details. |
| Open Source Code | Yes | Our source code is available at: https://github.com/tmlr-group/PPDG-MI. |
| Open Datasets | Yes | In line with existing MIA literature on face recognition, we use the CelebA [Liu et al., 2015], FaceScrub [Ng and Winkler, 2014], and FFHQ datasets [Karras et al., 2019]. |
| Dataset Splits | No | The paper discusses the division of datasets into 'private training dataset Dprivate' and 'public auxiliary dataset Dpublic' but does not specify explicit train/validation/test splits for the data used directly in their model inversion experiments or for evaluating their PPDG-MI method beyond what is implicitly used by the target models themselves. |
| Hardware Specification | Yes | In our experiments with Plug & Play Attacks (PPA), we conducted all of them on Oracle Linux Server 8.9 using NVIDIA Ampere A100-80G GPUs. ... For MIAs targeting low-resolution facial recognition tasks, we executed these experiments on Ubuntu 20.04.4 LTS, equipped with NVIDIA GeForce RTX 3090 GPUs. |
| Software Dependencies | Yes | The hardware operated under CUDA 11.7, Python 3.9.18, and PyTorch 1.13.1. ... This setup utilized CUDA 11.6, Python 3.7.12, and PyTorch 1.13.1. |
| Experiment Setup | Yes | All models are trained for 100 epochs using the Adam optimizer [Kingma and Ba, 2015], with an initial learning rate of 10^-3 and beta = (0.9, 0.999), and a weight decay of 10^-3. We reduce the learning rate by a factor of 0.1 after 75 and 90 epochs. The batch size is set to 128. ... During pre-attack latent code selection, we choose 100 candidates for each target identity from a search space of 500 latent codes for both CelebA and FaceScrub. ... samples are optimized for 70 steps for both CelebA and FaceScrub in the baseline attack and 35 steps for each round of MIA in PPDG-MI. |
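The learning-rate schedule quoted in the Experiment Setup row (initial rate 10^-3, reduced by a factor of 0.1 after epochs 75 and 90) can be sketched in plain Python. This mirrors the behaviour of a PyTorch `MultiStepLR` scheduler; the function name, defaults, and the choice of `>=` at the milestone boundary are illustrative assumptions, not details taken from the paper:

```python
def lr_at_epoch(epoch, base_lr=1e-3, milestones=(75, 90), gamma=0.1):
    """Learning rate at a given epoch under the quoted schedule:
    start at base_lr and multiply by gamma once each milestone
    epoch has been reached (a plain-Python sketch of PyTorch's
    MultiStepLR behaviour; boundary convention is an assumption)."""
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr

# Rates at a few representative epochs over the 100-epoch run:
schedule = {e: lr_at_epoch(e) for e in (0, 74, 80, 95)}
```

With these settings, epochs 0-74 train at 1e-3, epochs 75-89 at 1e-4, and the final stretch at 1e-5, matching the quoted "reduce by a factor of 0.1 after 75 and 90 epochs".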