Pseudo Label-Guided Model Inversion Attack via Conditional Generative Adversarial Network

Authors: Xiaojian Yuan, Kejiang Chen, Jie Zhang, Weiming Zhang, Nenghai Yu, Yang Zhang

AAAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments demonstrate that our PLG-MI attack significantly improves the attack success rate and visual quality for various datasets and models, notably, 2∼3× better than state-of-the-art attacks under large distributional shifts. Our code is available at: https://github.com/LetheSec/PLG-MI-Attack.
Researcher Affiliation | Collaboration | Xiaojian Yuan¹, Kejiang Chen*¹, Jie Zhang¹,², Weiming Zhang¹, Nenghai Yu¹, Yang Zhang³ — ¹University of Science and Technology of China, ²University of Waterloo, ³CISPA Helmholtz Center for Information Security
Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Our code is available at: https://github.com/LetheSec/PLG-MI-Attack.
Open Datasets | Yes | For face recognition, we select three widely used datasets for experiments: CelebA (Liu et al. 2015), FFHQ (Karras, Laine, and Aila 2019) and FaceScrub (Ng and Winkler 2014). ... More experiments on MNIST (LeCun et al. 1998), CIFAR10 and Chest X-Ray (Wang et al. 2017) can be found in the Appendix.
Dataset Splits | No | The paper describes how the dataset is split into private and public parts, and refers to a "standard setting" and "disjoint parts", but it does not specify explicit train/validation/test splits with percentages or counts for reproducibility.
Hardware Specification | No | The paper does not explicitly describe the hardware specifications (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper does not name the software frameworks or library versions it relies on; it reports only training hyperparameters (e.g., "To train the GAN, we used Adam optimizer with a learning rate of 0.0002, a batch size of 64 and β = (0, 0.9)."), which is insufficient to reconstruct the software environment.
Experiment Setup | Yes | We used Adam optimizer with a learning rate of 0.0002, a batch size of 64 and β = (0, 0.9). The hyperparameter α in Eq. (3) is set to 0.2. In stage-2, we use the Adam optimizer with a learning rate of 0.1 and β = (0.9, 0.999). The input vector z of the generator is drawn from a zero-mean unit-variance Gaussian distribution. We randomly initialize z for 5 times and optimize each round for 600 iterations. (See the code sketch below.)
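
The stage-1 and stage-2 hyperparameters quoted above map directly onto optimizer configurations. Below is a minimal PyTorch sketch of those settings; the toy `generator` and `target_model` modules, the one-hot conditioning, and the cross-entropy loss are placeholders (the paper itself uses a max-margin identity loss, and the authors' repository is the authoritative implementation):

```python
import torch
import torch.nn as nn

# Toy stand-ins for the conditional GAN and the target classifier;
# the real architectures are defined in the authors' repository.
z_dim, n_classes, batch = 128, 1000, 64
generator = nn.Linear(z_dim + n_classes, 3 * 64 * 64)
target_model = nn.Linear(3 * 64 * 64, n_classes)

# Stage 1: GAN training uses Adam with lr = 0.0002, batch size 64,
# betas = (0, 0.9); the discriminator would use the same settings.
# (The adversarial training loop itself is omitted here.)
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.0, 0.9))

# Stage 2: optimize the latent vector z with Adam, lr = 0.1,
# betas = (0.9, 0.999); z ~ N(0, I), 5 random restarts, 600 iterations each.
labels = torch.zeros(batch, n_classes)
labels[:, 0] = 1.0  # attack identity 0, as an illustrative target class
for _ in range(5):                       # 5 random initializations of z
    z = torch.randn(batch, z_dim, requires_grad=True)
    opt_z = torch.optim.Adam([z], lr=0.1, betas=(0.9, 0.999))
    for _ in range(600):                 # 600 optimization iterations per round
        opt_z.zero_grad()
        images = generator(torch.cat([z, labels], dim=1))
        # Placeholder identity loss: drive the target model's prediction
        # toward the attacked class (stands in for the paper's max-margin loss).
        loss = nn.functional.cross_entropy(target_model(images), labels.argmax(1))
        loss.backward()
        opt_z.step()
```

Note the asymmetry between the two stages: the β = (0, 0.9) pair for GAN training is the common choice in hinge-loss GAN setups, while stage-2 reverts to Adam's default betas with a much larger step size, presumably because only the low-dimensional latent code is being optimized.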