Backpropagating Linearly Improves Transferability of Adversarial Examples

Authors: Yiwen Guo, Qizhang Li, Hao Chen

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimental results demonstrate that this simple yet effective method obviously outperforms current state-of-the-arts in crafting transferable adversarial examples on CIFAR-10 and ImageNet, leading to more effective attacks on a variety of DNNs."
Researcher Affiliation | Collaboration | Yiwen Guo (ByteDance AI Lab, guoyiwen.ai@bytedance.com); Qizhang Li (ByteDance AI Lab, liqizhang@bytedance.com); Hao Chen (University of California, Davis, chen@ucdavis.edu)
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks; a minimal sketch of the LinBP idea is given below the table.
Open Source Code | Yes | Code is available at https://github.com/qizhangli/linbp-attack.
Open Datasets | Yes | "We focus on untargeted ℓ∞ attacks on deep image classifiers. Different methods are compared on CIFAR-10 [27] and ImageNet [41]..." Both datasets are publicly available (a loading sketch follows the table).
Dataset Splits | No | The paper mentions evaluating on 5,000 test instances and refers to the standard test sets, but it does not describe a validation split or how hyperparameters were tuned.
Hardware Specification | Yes | "All experiments are performed on an NVIDIA V100 GPU with code implemented using PyTorch [39]."
Software Dependencies | No | The paper states that the code was implemented in PyTorch [39] but does not give a version number for PyTorch or any other dependency.
Experiment Setup | Yes | "On both datasets, we set the maximum perturbation as ϵ = 0.1, 0.05, 0.03 to keep inline with ILA. ... we run for 100 iterations on CIFAR-10 inputs and 300 iterations on ImageNet inputs with a step size of 1/255 such that its performance reaches plateaus on both datasets." A reconstruction of this attack loop follows the table.
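
Although the paper gives no pseudocode, the core LinBP idea is compact enough to sketch: ReLU behaves normally in the forward pass, but gradients are backpropagated as if the (later) ReLU layers were linear. The minimal PyTorch sketch below is ours, not the authors'; it omits the additional gradient rescaling the paper applies in residual units, so see the linked repository for the full implementation.

```python
import torch

class LinBPReLU(torch.autograd.Function):
    """ReLU in the forward pass, identity in the backward pass."""

    @staticmethod
    def forward(ctx, x):
        # Standard ReLU forward; the activation mask is deliberately
        # not saved, because the backward pass ignores it.
        return x.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_output):
        # Ordinary ReLU backward would zero the gradient wherever the
        # input was negative; LinBP instead treats the layer as linear
        # and passes the gradient through unchanged.
        return grad_output

def linbp_relu(x):
    return LinBPReLU.apply(x)
```

In use, one would swap this function in for the standard ReLU in the later layers of the source model before computing the attack gradient, leaving the forward predictions untouched.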
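
For the datasets row: CIFAR-10 can be fetched directly through torchvision, while ImageNet must be obtained separately. A sketch, with the caveat that the plain ToTensor transform is an assumption; the paper does not specify its preprocessing pipeline.

```python
import torchvision
import torchvision.transforms as T

# CIFAR-10 test set downloads automatically; ImageNet requires a
# manual download. ToTensor() is an assumed preprocessing step.
cifar10_test = torchvision.datasets.CIFAR10(
    root="./data", train=False, download=True, transform=T.ToTensor()
)
```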
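
The quoted experiment setup describes a standard iterative FGSM-style attack under an ℓ∞ budget. The reconstruction below is a hedged sketch, assuming inputs in [0, 1]; the function name and exact projection order are our choices rather than the authors' code, but the hyperparameters mirror the reported ones (ϵ ∈ {0.1, 0.05, 0.03}, step size 1/255, 100 iterations on CIFAR-10 and 300 on ImageNet).

```python
import torch
import torch.nn.functional as F

def iterative_fgsm(model, x, y, eps=0.03, step_size=1/255, n_iters=100):
    """Untargeted iterative FGSM under an l_inf constraint."""
    x_adv = x.clone().detach()
    for _ in range(n_iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Take a signed gradient ascent step on the loss, then project
        # back into the eps-ball around x and the valid pixel range.
        x_adv = x_adv.detach() + step_size * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv
```

Combining this loop with a source model whose later ReLUs use linbp_relu (above) yields a LinBP-style transfer attack in spirit, though the authors' released code remains the authoritative reference.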