Variational Model Inversion Attacks

Authors: Kuan-Chieh Wang, Yan Fu, Ke Li, Ashish Khisti, Richard Zemel, Alireza Makhzani

NeurIPS 2021

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Empirically, our method substantially improves performance in terms of target attack accuracy, sample realism, and diversity on datasets of faces and chest X-ray images. |
| Researcher Affiliation | Collaboration | University of Toronto, Vector Institute, Simon Fraser University |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code can be found at https://github.com/wangkua1/vmi. |
| Open Datasets | Yes | For the MNIST task, the letters split (i.e., handwritten English letters) of the EMNIST [Cohen et al., 2017] dataset was used as the auxiliary dataset. For CelebA [Liu et al., 2015], the 1000 most frequent identities form the target dataset and the remaining identities the auxiliary dataset. For Chest X-ray (CXR) [Wang et al., 2017], the 8 diseases outlined by Wang et al. [2017] form the target dataset, with 50,000 images randomly selected from the remainder as the auxiliary dataset. (A construction sketch follows the table.) |
| Dataset Splits | No | Optimization hyperparameters were selected to maximize accuracy on a validation set of the private data, but specific percentages or sample counts for the training, validation, and test splits are not provided. |
| Hardware Specification | No | The paper does not provide hardware details such as GPU/CPU models, processor types, or memory amounts used for the experiments. |
| Software Dependencies | No | The paper mentions StyleGAN along with deep normalizing flows, ResNets, an Inception network, VGG, and ArcFace, but does not provide version numbers for any of these software dependencies. |
| Experiment Setup | No | The paper states that "optimization hyperparameters were selected to maximize accuracy on a validation set" but does not give specific values for these hyperparameters or other training configurations in the main text. |
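
The dataset construction described in the Open Datasets row can be made concrete. Below is a minimal sketch, assuming torchvision's EMNIST and CelebA loaders and an illustrative local path (`data`); it is not the authors' released code (see the repository linked above). The CXR split is omitted because ChestX-ray8 has no torchvision loader, and CelebA may need to be placed locally if the automatic download is unavailable.

```python
from collections import Counter

import torch
from torchvision import datasets

# MNIST task: the EMNIST "letters" split (handwritten English letters)
# serves as the auxiliary dataset.
aux_mnist = datasets.EMNIST(root="data", split="letters", train=True, download=True)

# CelebA: rank identities by frequency. The 1000 most frequent identities
# form the target dataset; all remaining identities form the auxiliary set.
celeba = datasets.CelebA(root="data", split="all", target_type="identity", download=True)
identities = celeba.identity.squeeze(1).tolist()          # one identity label per image
top_1000 = {ident for ident, _ in Counter(identities).most_common(1000)}

target_idx = [i for i, ident in enumerate(identities) if ident in top_1000]
aux_idx = [i for i, ident in enumerate(identities) if ident not in top_1000]

target_set = torch.utils.data.Subset(celeba, target_idx)  # private/target data
aux_set = torch.utils.data.Subset(celeba, aux_idx)        # public/auxiliary data
```

Using `Subset` keeps a single copy of the underlying images on disk while exposing disjoint target and auxiliary views, mirroring the target/auxiliary separation the paper describes.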