Alleviating Adversarial Attacks on Variational Autoencoders with MCMC

Authors: Anna Kuzina, Max Welling, Jakub M. Tomczak

NeurIPS 2022

Reproducibility assessment (variable, result, and supporting LLM response):
Research Type: Experimental. "We validate our approach on a variety of datasets (MNIST, Fashion MNIST, Color MNIST, CelebA) and VAE configurations (β-VAE, NVAE, β-TCVAE), and show that our approach consistently improves the model robustness to adversarial attacks."
Researcher Affiliation: Academia. Anna Kuzina (Vrije Universiteit Amsterdam, a.kuzina@vu.nl), Max Welling (Universiteit van Amsterdam, m.welling@uva.nl), Jakub M. Tomczak (Vrije Universiteit Amsterdam, j.m.tomczak@vu.nl).
Pseudocode: Yes. The paper provides Algorithm 1, "One Step of HMC", describing a single Hamiltonian Monte Carlo step (a sketch follows this table).
Open Source Code: Yes. "All implementation details and hyperparameters are included in the Appendix D and code repository": https://github.com/AKuzina/defend_vae_mcmc
Open Datasets: Yes. "VAEs are trained on the MNIST, Fashion MNIST [44] and Color MNIST datasets. Following [13], we construct the Color MNIST dataset from MNIST by artificially coloring each image with seven colors (all corners of the RGB cube except for black). We attack models trained on MNIST and CelebA [29] datasets." A sketch of this coloring follows the table.
Dataset Splits: Yes. "We use the standard train/validation/test splits of the datasets."
Hardware Specification: No. The paper states "Experiments were carried out on the Dutch national e-infrastructure with the support of SURF Cooperative" but does not list specific hardware (e.g., GPU/CPU models, memory).
Software Dependencies: No. The paper notes "All models are implemented in PyTorch [35]" but does not pin PyTorch or any other dependency to a version number.
Experiment Setup: Yes. "All models are implemented in PyTorch [35] and trained for 100 epochs using the Adam optimizer [24] with a learning rate of 0.001." "For optimization, we use the projected gradient descent [PGD] with the number of iterations limited to 50 per point. We consider 0, 100, 500 and 1000 steps for this experiment." A PGD attack sketch follows this table.
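
For reference, below is a minimal PyTorch sketch of what a single HMC step like Algorithm 1 looks like: sample a Gaussian momentum, run leapfrog integration, then apply a Metropolis-Hastings accept/reject. The `log_density` callable, step size, and leapfrog count are illustrative assumptions, not the authors' settings.

```python
import torch

def hmc_step(z, log_density, step_size=0.1, n_leapfrog=5):
    """One HMC step: sample a momentum, integrate, accept or reject.

    `log_density` maps a batch of latents (B, D) to per-sample
    unnormalized log densities (B,). Names and defaults here are
    illustrative, not the authors' configuration.
    """
    def grad_log_density(z):
        z = z.detach().requires_grad_(True)
        return torch.autograd.grad(log_density(z).sum(), z)[0]

    def hamiltonian(z, p):
        # Potential energy -log p(z) plus Gaussian kinetic energy.
        return -log_density(z) + 0.5 * (p ** 2).sum(dim=-1)

    p = torch.randn_like(z)  # auxiliary Gaussian momentum

    # Leapfrog integration: half momentum step, alternating full
    # position/momentum steps, closing half momentum step.
    z_new = z.detach().clone()
    p_new = p + 0.5 * step_size * grad_log_density(z_new)
    for i in range(n_leapfrog):
        z_new = z_new + step_size * p_new
        if i < n_leapfrog - 1:
            p_new = p_new + step_size * grad_log_density(z_new)
    p_new = p_new + 0.5 * step_size * grad_log_density(z_new)

    # Metropolis-Hastings correction keeps the target distribution invariant.
    with torch.no_grad():
        accept_prob = torch.exp(hamiltonian(z, p) - hamiltonian(z_new, p_new)).clamp(max=1.0)
        accept = (torch.rand_like(accept_prob) < accept_prob).unsqueeze(-1)
        return torch.where(accept, z_new, z.detach())
```

In the paper's setting, such steps would start from the encoder's output for a (possibly attacked) input and target the unnormalized posterior, i.e. log p(x | z) + log p(z); that density is left abstract above.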
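The Open Datasets row quotes the Color MNIST construction: each grayscale digit is colored with one of seven colors, the corners of the RGB cube except black. Below is a sketch of that recipe, assuming images arrive as (1, H, W) tensors in [0, 1]; whether colors are sampled uniformly at random is an assumption, and the authors' repository may do this differently.

```python
import itertools
import torch

# Seven corners of the RGB cube, excluding black (0, 0, 0); white is kept.
COLORS = torch.tensor(
    [c for c in itertools.product([0.0, 1.0], repeat=3) if any(c)]
)  # shape (7, 3)

def colorize(img, color_idx=None):
    """Turn a (1, H, W) grayscale digit in [0, 1] into a (3, H, W) colored one.

    If color_idx is None, a color is drawn uniformly at random; uniform
    sampling is an assumption here, not fixed by the quoted description.
    """
    if color_idx is None:
        color_idx = torch.randint(len(COLORS), (1,)).item()
    color = COLORS[color_idx].view(3, 1, 1)
    # Broadcast the grayscale intensity into the chosen RGB channel(s).
    return img * color
```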
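The Experiment Setup row fixes a 50-step PGD budget for constructing adversarial inputs. Below is a sketch of an L-infinity PGD attack on a VAE encoder; only the 50 steps come from the quoted setup, while the objective (L2 distance between latent codes, a common choice for latent-space attacks on VAEs), `eps`, `step_size`, and the assumption that `encoder` returns latent means are all illustrative.

```python
import torch

def pgd_attack(encoder, x, eps=0.1, n_steps=50, step_size=0.01):
    """L-infinity PGD that pushes the encoding of x + delta away from
    the encoding of the clean x. Hyperparameters besides n_steps are
    illustrative assumptions, not the paper's values.
    """
    with torch.no_grad():
        mu_ref = encoder(x)  # latent code of the clean input

    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(n_steps):
        loss = ((encoder(x + delta) - mu_ref) ** 2).sum()
        loss.backward()
        with torch.no_grad():
            delta += step_size * delta.grad.sign()        # ascend the objective
            delta.clamp_(-eps, eps)                       # project onto the eps-ball
            delta.copy_((x + delta).clamp(0.0, 1.0) - x)  # keep pixels in [0, 1]
        delta.grad.zero_()
    return (x + delta).detach()
```

The sign-of-gradient step followed by projection onto the ε-ball is the standard L-infinity PGD recipe; the paper's exact attack objective (for example, matching a target image's latent code rather than maximizing distance from the clean code) may differ.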