Towards Imperceptible and Robust Adversarial Example Attacks Against Neural Networks

Authors: Bo Luo, Yannan Liu, Lingxiao Wei, Qiang Xu

AAAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimental results demonstrate the efficacy of the proposed technique." and "Experimental Evaluations. Dataset. All the experiments are performed on MNIST and CIFAR10 datasets. ... DNN Model. For each dataset, we trained a model. ... Baselines. The baselines used in these experiments are three widely-used adversarial example attacks..."
Researcher Affiliation | Academia | Bo Luo, Yannan Liu, Lingxiao Wei, Qiang Xu, Department of Computer Science & Engineering, The Chinese University of Hong Kong, {boluo,ynliu,lxwei,qxu}@cse.cuhk.edu.hk
Pseudocode | Yes | "Algorithm 1: The proposed algorithm to generate adversarial examples." (a hedged reimplementation sketch follows the table)
Open Source Code | No | The paper does not provide any concrete access information (a link or an explicit statement of release) for open-source code implementing the described methodology.
Open Datasets | Yes | "Dataset. All the experiments are performed on MNIST and CIFAR10 datasets. The MNIST dataset (LeCun, Cortes, and Burges 2010) includes 70000 gray-scale hand-written digit images... The CIFAR10 dataset (Krizhevsky, Nair, and Hinton 2014) contains 60000 color images." (a loading sketch for both datasets follows the table)
Dataset Splits | No | "We perform adversarial example attacks against the testing set (10000 test images) in MNIST and CIFAR10 respectively." The paper states the test-set size but does not specify the training/validation splits or their sizes.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used to run the experiments.
Software Dependencies | No | The paper refers to deep neural networks and common functions but does not name any software packages or version numbers needed for reproducibility.
Experiment Setup | Yes | "In our method, we select 20 pixels to add perturbations with a magnitude of 0.01 in each iteration." (these values are reused in the attack sketch following the table)
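
Both datasets cited in the Open Datasets row are publicly available. As a minimal loading sketch, assuming PyTorch/torchvision as the data pipeline (the paper does not state which framework or loader it used), the test splits attacked in the paper can be obtained as follows:

```python
# Hedged sketch: load the MNIST and CIFAR10 test sets named in the paper.
# torchvision is an assumption; the paper does not specify its data pipeline.
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()  # scales pixel values to [0, 1]
mnist_test = datasets.MNIST(root="data", train=False, download=True, transform=to_tensor)
cifar_test = datasets.CIFAR10(root="data", train=False, download=True, transform=to_tensor)
print(len(mnist_test), len(cifar_test))  # 10000 test images in each, matching the Dataset Splits quote
```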
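
Algorithm 1 itself is not reproduced in this report, but the Pseudocode and Experiment Setup rows together describe an iterative attack in which a small set of pixels is selected and perturbed by a small magnitude each step until the model misclassifies. The sketch below is a hedged reimplementation under assumptions not stated in the paper: a PyTorch classifier, and plain gradient magnitude as the pixel-ranking score, whereas the paper's priority also accounts for human perceptual sensitivity.

```python
# Hedged sketch of an iterative, pixel-selection adversarial attack in the spirit
# of Algorithm 1. Assumptions not in the paper: PyTorch, a pretrained `model`, and
# |gradient| as the pixel-ranking score (the paper's priority is perceptibility-aware).
import torch
import torch.nn.functional as F

def greedy_pixel_attack(model, x, label, num_pixels=20, magnitude=0.01, max_iters=100):
    """Perturb `num_pixels` pixels by `magnitude` per iteration until `model`
    misclassifies `x` or `max_iters` is reached. `x` is a (1, C, H, W) tensor in [0, 1]."""
    x_adv = x.clone().detach()
    for _ in range(max_iters):
        x_adv.requires_grad_(True)
        logits = model(x_adv)
        if logits.argmax(dim=1).item() != label:  # attack succeeded
            return x_adv.detach()
        loss = F.cross_entropy(logits, torch.tensor([label]))
        grad = torch.autograd.grad(loss, x_adv)[0]

        # Rank pixels by gradient magnitude and take the top-k; the paper instead
        # uses a perturbation priority that also weighs local perceptual sensitivity.
        flat_grad = grad.abs().flatten()
        top_idx = flat_grad.topk(num_pixels).indices

        # Move each selected pixel in the direction that increases the loss.
        step = torch.zeros_like(flat_grad)
        step[top_idx] = magnitude * grad.flatten()[top_idx].sign()
        x_adv = (x_adv.detach() + step.view_as(x_adv)).clamp(0.0, 1.0)
    return x_adv.detach()
```

The `num_pixels=20` and `magnitude=0.01` defaults mirror the values quoted in the Experiment Setup row; everything else (stopping criterion, iteration cap, scoring function) is illustrative rather than taken from the paper.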