Improving VAEs' Robustness to Adversarial Attack

Authors: Matthew J. F. Willetts, Alexander Camuto, Tom Rainforth, Stephen Roberts, Christopher C. Holmes

Venue: ICLR 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We confirm their capabilities on several different datasets and with current state-of-the-art VAE adversarial attacks, and also show that they increase the robustness of downstream tasks to attack."
Researcher Affiliation | Academia | University of Oxford; Alan Turing Institute, London
Pseudocode | No | The paper contains mathematical derivations and proofs, but no structured pseudocode or algorithm blocks were found.
Open Source Code | No | The paper does not contain any explicit statement about providing open-source code for the described methodology, nor does it provide a link to a code repository.
Open Datasets | Yes | "We carry out these attacks for dSprites (Matthey et al., 2017), Chairs (Aubry et al., 2014) and 3D Faces (Paysan et al., 2009), for a range of β and λ values."
Dataset Splits | No | The paper does not explicitly state the training, validation, and test dataset splits (e.g., percentages or sample counts) for the datasets used.
Hardware Specification | Yes | "All runs were done on the Azure cloud system on NC6 GPU machines."
Software Dependencies | No | The paper mentions using Adam for training but does not provide version numbers for any software libraries or dependencies used in the experiments.
Experiment Setup | Yes | "To train the model we used ADAM (Kingma & Ba, 2015) with default parameters, a cosine-decaying learning rate of 0.001, and a batch size of 1024."
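
The Experiment Setup row pins down the optimiser, learning-rate schedule, and batch size but nothing else. Below is a minimal sketch of that configuration, assuming a PyTorch implementation (the paper names no framework and releases no code); the TinyVAE model, synthetic data, and epoch count are illustrative placeholders, not the paper's architectures.

```python
import torch
import torch.nn as nn
from torch.optim import Adam
from torch.optim.lr_scheduler import CosineAnnealingLR
from torch.utils.data import DataLoader, TensorDataset

class TinyVAE(nn.Module):
    """Stand-in Gaussian VAE; the paper's hierarchical architectures are not reproduced here."""
    def __init__(self, x_dim=64 * 64, z_dim=10):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU(), nn.Linear(256, 2 * z_dim))
        self.dec = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, x_dim))

    def forward(self, x):
        mu, log_var = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()  # reparameterisation trick
        return self.dec(z), mu, log_var

def negative_elbo(x_hat, x, mu, log_var):
    # Gaussian reconstruction term plus analytic KL to a standard-normal prior.
    recon = nn.functional.mse_loss(x_hat, x, reduction="sum") / x.size(0)
    kl = -0.5 * (1 + log_var - mu.pow(2) - log_var.exp()).sum(-1).mean()
    return recon + kl

# Synthetic 64x64 images stand in for dSprites / Chairs / 3D Faces.
data = TensorDataset(torch.rand(4096, 64 * 64), torch.zeros(4096))
loader = DataLoader(data, batch_size=1024, shuffle=True)  # batch size 1024, as reported

model = TinyVAE()
epochs = 10  # illustrative; the paper does not quote an epoch count here
optimizer = Adam(model.parameters(), lr=1e-3)  # Adam with default parameters, lr 0.001
scheduler = CosineAnnealingLR(optimizer, T_max=epochs * len(loader))  # cosine decay

for _ in range(epochs):
    for x, _ in loader:
        optimizer.zero_grad()
        x_hat, mu, log_var = model(x)
        negative_elbo(x_hat, x, mu, log_var).backward()
        optimizer.step()
        scheduler.step()  # decay the learning rate along a cosine curve
```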
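
For context on the "VAE adversarial attacks" quoted in the Research Type row: attacks in this literature typically optimise a small input perturbation that drives the encoder's posterior for the perturbed image toward that of an attacker-chosen target. A minimal sketch of such a latent-space attack, reusing the hypothetical TinyVAE above; the loss weighting and step counts are illustrative assumptions, not the paper's attack settings.

```python
def latent_attack(model, x_src, x_tgt, steps=200, lr=0.01, c=1.0):
    """Find a perturbation d so that enc(x_src + d) approaches enc(x_tgt),
    while keeping d small. Purely illustrative, not the paper's attack."""
    with torch.no_grad():
        mu_tgt, _ = model.enc(x_tgt).chunk(2, dim=-1)  # target posterior mean
    d = torch.zeros_like(x_src, requires_grad=True)
    opt = Adam([d], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        mu_adv, _ = model.enc(x_src + d).chunk(2, dim=-1)
        # Match the target embedding, penalising large perturbations.
        loss = (mu_adv - mu_tgt).pow(2).sum() + c * d.pow(2).sum()
        loss.backward()
        opt.step()
    return (x_src + d).detach()

# Usage: craft an adversarial input whose encoding mimics the target's.
x_src, x_tgt = torch.rand(1, 64 * 64), torch.rand(1, 64 * 64)
x_adv = latent_attack(model, x_src, x_tgt)
```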