AdvFlow: Inconspicuous Black-box Adversarial Attacks using Normalizing Flows

Authors: Hadi Mohaghegh Dolatabadi, Sarah Erfani, Christopher Leckie

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Also, our experimental results show competitive performance of the proposed approach with some of the existing attack methods on defended classifiers."
Researcher Affiliation | Academia | "Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie, School of Computing and Information Systems, The University of Melbourne, Parkville, Victoria, Australia. hadi.mohagheghdolatabadi@student.unimelb.edu.au"
Pseudocode | Yes | "Algorithm 1 in Appendix D.1 summarizes our black-box attack method."
Open Source Code | Yes | "The code is available at https://github.com/hmdolatabadi/AdvFlow."
Open Datasets | Yes | "we train target classifiers on CIFAR-10 [30] and SVHN [40] datasets. The classifier is a VGG19 [50] trained to detect smiles in CelebA [36] faces."
Dataset Splits | No | The paper mentions using 10% of the data to train adversarial attack detectors, but does not specify train/validation splits for the main target classifiers on CIFAR-10 and SVHN.
Hardware Specification | No | "This research was undertaken using the LIEF HPC-GPGPU Facility hosted at the University of Melbourne. This Facility was established with the assistance of LIEF Grant LE170100200." While a facility type is mentioned, specific hardware details such as GPU/CPU models or memory amounts are not provided.
Software Dependencies | No | "We also would like to thank the authors and maintainers of PyTorch [43], NumPy [17], and Matplotlib [21]." While these software packages are mentioned, specific version numbers required for reproducibility are not provided.
Experiment Setup | Yes | "More details on the defense methods as well as attack hyperparameters can be found in Appendices B.3 and B.4. We use the Adam optimizer [26] with a learning rate of 0.01 for both NATTACK and AdvFlow. We also use a weight decay of 0.001. The number of outer iterations for AdvFlow and NATTACK is set to 2000. The population size of AdvFlow and NATTACK is set to 20. For the attacks against CIFAR-10 and SVHN datasets, we set ϵmax = 8/255 and ϵmax = 16/255, respectively."
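To make the quoted setup concrete, the following is a minimal, hedged sketch of the NES-style search loop that both NATTACK and AdvFlow build on, wired up with the hyperparameters quoted above (population size 20, Adam with learning rate 0.01, ϵmax = 8/255 for CIFAR-10). The objective here is a toy 1-D function standing in for the black-box classifier loss; the flow model, antithetic sampling, and all other details are assumptions for illustration, not the authors' implementation (see their repository and Algorithm 1 for the real method).

```python
import math
import random

# Hyperparameters quoted in the paper's setup.
POP_SIZE = 20        # population size per outer iteration
LR = 0.01            # Adam learning rate
EPS_MAX = 8 / 255    # L-infinity budget for CIFAR-10 (16/255 for SVHN)
SIGMA = 0.05         # sampling std of the search distribution (assumed, not quoted)

def toy_loss(x):
    # Stand-in for the black-box attack objective (e.g. a margin loss
    # from classifier queries); minimized at x = 0.5.
    return (x - 0.5) ** 2

def nes_gradient(mu, loss_fn, rng, pop=POP_SIZE, sigma=SIGMA):
    # Estimate d(loss)/d(mu) from loss *queries only* -- no gradients,
    # which is what makes the attack black-box.
    grad = 0.0
    for _ in range(pop):
        eps = rng.gauss(0.0, 1.0)
        grad += loss_fn(mu + sigma * eps) * eps
    return grad / (pop * sigma)

def run_attack(mu=0.0, steps=200, lr=LR, beta1=0.9, beta2=0.999, delta=1e-8):
    # Adam update on the search distribution's mean, as in the quoted setup
    # (fewer steps here than the paper's 2000 outer iterations, for brevity).
    rng = random.Random(0)
    m = v = 0.0
    for t in range(1, steps + 1):
        g = nes_gradient(mu, toy_loss, rng)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)
        v_hat = v / (1 - beta2 ** t)
        mu -= lr * m_hat / (math.sqrt(v_hat) + delta)
    return mu

mu = run_attack()
```

In the real attacks, `mu` parameterizes a distribution over perturbations (for AdvFlow, in the latent space of a pretrained normalizing flow), and each sampled perturbation is clipped to the ϵmax ball before querying the classifier.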