Enhancing Transformation-Based Defenses Against Adversarial Attacks with a Distribution Classifier

Authors: Connie Kou, Hwee Kuan Lee, Ee-Chien Chang, Teck Khim Ng

ICLR 2020

Reproducibility assessment: each entry gives the variable, the result, and the supporting LLM response.
Research Type: Experimental. "5 EXPERIMENTS AND DISCUSSION: In the following section, we describe our experimental setup to evaluate the performance on clean and adversarial images with our distribution classifier method. ... Datasets and CNN networks: We use the MNIST (LeCun et al., 1998), CIFAR10 and CIFAR100 (Krizhevsky & Hinton, 2009) datasets. ... Figure 5 (left) shows the test accuracies of the three transformation-based defenses with majority voting and with the three distribution classifiers."
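The defenses being evaluated aggregate model predictions over many random transformations of each input. A minimal sketch of the majority-voting baseline may help; everything here (the toy linear model, the Gaussian-noise transformation, the class count) is a placeholder standing in for the paper's actual CNNs and image transformations:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_transform(x, rng):
    # Stand-in for the paper's stochastic image transformations
    # (e.g. random resizing or pixel deflection); here: additive noise.
    return x + rng.normal(scale=0.1, size=x.shape)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def toy_model(x):
    # Stand-in 3-class "classifier" (a fixed linear map), not a real CNN.
    W = np.array([[1.0, -0.5, 0.2],
                  [0.3, 0.8, -0.1]])
    return softmax(x @ W)

def defend(x, n_samples=50):
    # Collect softmax outputs over n_samples random transformations.
    probs = np.stack([toy_model(random_transform(x, rng))
                      for _ in range(n_samples)])
    votes = probs.argmax(axis=-1)                    # hard label per sample
    majority = int(np.bincount(votes, minlength=3).argmax())
    # The paper's method replaces this majority vote with a classifier
    # trained on the *distribution* of softmax outputs; its input
    # would be the full `probs` array rather than the vote counts.
    return majority, probs

x = np.array([1.0, 0.5])
label, dist = defend(x)
```

The key design point the paper tests is that `probs` carries more information than `votes`, so a classifier over the softmax distribution can outperform the vote.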
Researcher Affiliation: Academia. Connie Kou (1,2), Hwee Kuan Lee (1,2,3,4), Ee-Chien Chang (1), Teck Khim Ng (1). 1: School of Computing, National University of Singapore; 2: Bioinformatics Institute, A*STAR, Singapore; 3: Image and Pervasive Access Lab (IPAL), CNRS UMI 2955; 4: Singapore Eye Research Institute. {koukl,changec,ngtk}@comp.nus.edu.sg, leehk@bii.a-star.edu.sg
Pseudocode: No. The paper describes its methods in prose and diagrams but includes no formally structured pseudocode or algorithm blocks.
Open Source Code: No. The paper states that "The attacks are implemented using the CleverHans library (Papernot et al., 2018)" but does not provide a link to, or a statement confirming the release of, the authors' own source code for the proposed defense method.
Open Datasets: Yes. "Datasets and CNN networks: We use the MNIST (LeCun et al., 1998), CIFAR10 and CIFAR100 (Krizhevsky & Hinton, 2009) datasets."
Dataset Splits: Yes. "The hyperparameter tuning for each defense is conducted on the validation sets."
Hardware Specification: No. The paper provides no hardware details such as GPU/CPU models, memory, or cloud instance types used for the experiments.
Software Dependencies: No. The paper mentions the CleverHans library and the Adam optimizer but gives no version numbers for these or for other software dependencies such as Python, TensorFlow/PyTorch, or scikit-learn.
Experiment Setup: Yes. "In the following section, we describe our experimental setup to evaluate the performance on clean and adversarial images with our distribution classifier method. ... Tables 1 to 3 show the hyperparameter settings used for the adversarial attacks. ... Tables 4 to 6 show the image transformation parameters used for MNIST and CIFAR10 respectively. ... The kernel width is optimized to be 0.05. ... The hyperparameters used are shown in Tables 8 to 10."
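For intuition on the distribution-classifier step the excerpts refer to, one simple way to turn the collection of softmax samples into a fixed-length input for a downstream classifier is to histogram the sampled probabilities per class. This is only an illustrative featurisation sketch; the bin count, the Dirichlet-generated fake samples, and the histogram scheme are assumptions, not the paper's actual classifiers:

```python
import numpy as np

def distribution_features(probs, n_bins=20):
    # probs: (n_samples, n_classes) softmax outputs gathered over random
    # transformations of one image. Histogram each class's sampled
    # probabilities over [0, 1] and concatenate, yielding one
    # fixed-length feature vector per image.
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    feats = [np.histogram(probs[:, c], bins=edges)[0] / len(probs)
             for c in range(probs.shape[1])]
    return np.concatenate(feats)

rng = np.random.default_rng(0)
# Fake softmax samples for a 3-class problem (rows sum to 1).
probs = rng.dirichlet(alpha=[2.0, 1.0, 1.0], size=50)
phi = distribution_features(probs)
# phi has length n_classes * n_bins; each class's histogram sums to 1.
```

A vector like `phi` could then be fed to any off-the-shelf classifier trained to separate the softmax distributions of clean versus adversarial inputs.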