Fast Minimum-norm Adversarial Attacks through Adaptive Norm Constraints

Authors: Maura Pintor, Fabio Roli, Wieland Brendel, Battista Biggio

Venue: NeurIPS 2021

Reproducibility variables, each listed with the assessed result and the supporting LLM response:
Research Type: Experimental
LLM Response: Extensive experiments show that FMN significantly outperforms existing ℓ0, ℓ1, and ℓ∞-norm attacks in terms of perturbation size, convergence speed and computation time, while reporting comparable performances with state-of-the-art ℓ2-norm attacks. Our open-source code is available at: https://github.com/pralab/Fast-Minimum-Norm-FMN-Attack. We report here an extensive experimental analysis involving several state-of-the-art defenses and minimum-norm attacks, covering ℓ0, ℓ1, ℓ2 and ℓ∞ norms.
Researcher Affiliation: Collaboration
LLM Response:
- Maura Pintor, University of Cagliari, Italy; Pluribus One, Italy (maura.pintor@unica.it)
- Fabio Roli, University of Cagliari, Italy; Pluribus One, Italy (roli@unica.it)
- Wieland Brendel, Tübingen AI Center, University of Tübingen, Germany (wieland.brendel@uni-tuebingen.de)
- Battista Biggio, University of Cagliari, Italy; Pluribus One, Italy (battista.biggio@unica.it)
Pseudocode: Yes
LLM Response: Algorithm 1 Fast Minimum-norm (FMN) Attack
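To make the referenced pseudocode concrete, below is a minimal PyTorch sketch of an FMN-style ℓ2 attack loop, assuming 4D image batches in [0, 1]. It keeps the core structure of Algorithm 1 (a gradient step on a logit-difference loss, a multiplicative ε update driven by whether the current point is adversarial, and a projection onto the ε-ball), but it simplifies the paper's ε estimation; all names are illustrative, not the authors' code.

```python
import math
import torch

def fmn_l2_sketch(model, x, y, steps=1000, alpha0=1.0, alphaK=1e-5,
                  gamma0=0.05, gammaK=1e-4):
    """Simplified FMN-style l2 attack (illustrative sketch)."""
    delta = torch.zeros_like(x, requires_grad=True)
    eps = torch.full((x.shape[0],), float("inf"), device=x.device)
    best_norm = torch.full_like(eps, float("inf"))
    best_delta = torch.zeros_like(x)

    for k in range(steps):
        # Cosine-anneal the step size alpha and the eps decay factor gamma.
        cos = (1 + math.cos(math.pi * k / steps)) / 2
        alpha = alphaK + (alpha0 - alphaK) * cos
        gamma = gammaK + (gamma0 - gammaK) * cos

        logits = model(x + delta)
        # Logit-difference loss: positive iff the sample is misclassified.
        true_logit = logits.gather(1, y[:, None]).squeeze(1)
        others = logits.scatter(1, y[:, None], float("-inf"))
        loss = others.max(dim=1).values - true_logit
        grad = torch.autograd.grad(loss.sum(), delta)[0]

        with torch.no_grad():
            is_adv = loss > 0
            norm = delta.flatten(1).norm(dim=1)
            # Shrink eps where adversarial, grow it where not (a simplified
            # stand-in for the paper's eps update).
            eps = torch.where(is_adv, torch.minimum(eps, norm) * (1 - gamma),
                              eps * (1 + gamma))
            # Track the smallest adversarial perturbation found so far.
            improved = is_adv & (norm < best_norm)
            best_norm = torch.where(improved, norm, best_norm)
            best_delta[improved] = delta[improved]

            # Normalized gradient step, then project onto the eps-ball
            # and keep x + delta inside the [0, 1] box.
            g_norm = grad.flatten(1).norm(dim=1).clamp_min(1e-12)
            delta += alpha * grad / g_norm.view(-1, 1, 1, 1)
            d_norm = delta.flatten(1).norm(dim=1).clamp_min(1e-12)
            delta *= (eps / d_norm).clamp(max=1).view(-1, 1, 1, 1)
            delta.clamp_(min=-x, max=1 - x)

    return x + best_delta
```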
Open Source Code: Yes
LLM Response: Our open-source code is available at: https://github.com/pralab/Fast-Minimum-Norm-FMN-Attack.
Open Datasets: Yes
LLM Response: Datasets. We consider two commonly-used datasets for benchmarking adversarial robustness of deep neural networks, i.e., the MNIST handwritten digits and CIFAR10.
Dataset Splits: No
LLM Response: The paper mentions using 'a subset of 1000 test samples' for MNIST and CIFAR10, and '20 validation samples' for ImageNet hyperparameter tuning, but it does not provide explicit, comprehensive training/validation/test dataset splits (e.g., percentages, counts, or references to standard, reproducible splits for all datasets) needed to reproduce the experiments.
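For context, the evaluation data described in the two entries above can be reconstructed along these lines with torchvision; taking the first 1000 test samples is an assumption, since the paper does not state how the subset is drawn.

```python
import torch
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()
# Standard test splits of the two benchmark datasets used in the paper.
mnist_test = datasets.MNIST("data", train=False, download=True, transform=to_tensor)
cifar_test = datasets.CIFAR10("data", train=False, download=True, transform=to_tensor)

# The paper evaluates on 'a subset of 1000 test samples'; taking the
# first 1000 points is a plausible but unverified choice.
mnist_subset = torch.utils.data.Subset(mnist_test, range(1000))
cifar_subset = torch.utils.data.Subset(cifar_test, range(1000))
loader = torch.utils.data.DataLoader(cifar_subset, batch_size=100, shuffle=False)
```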
Hardware Specification: Yes
LLM Response: The average runtime per query for each attack-model pair, measured on a workstation with an NVIDIA GeForce RTX 2080 Ti GPU with 11GB of RAM, can be found in Table 2.
Software Dependencies: Yes
LLM Response: We use the implementation of FAB from Ding et al. [12], while for all the remaining attacks we use the implementation available in Foolbox [21, 22]. (Reference [12] specifies: G. W. Ding, L. Wang, and X. Jin. AdverTorch v0.1: An adversarial robustness toolbox based on PyTorch. arXiv preprint arXiv:1902.07623, 2019.)
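As a usage illustration, recent Foolbox releases (3.3 and later) ship FMN variants under names such as L2FMNAttack and LInfFMNAttack; the snippet below is a hedged sketch of that API, and exact class or argument names may differ across versions.

```python
import torch
import torchvision.models as models
import foolbox as fb

# Wrap any PyTorch classifier; an untrained ResNet-18 keeps the demo
# self-contained (no weight download needed).
model = models.resnet18().eval()
fmodel = fb.PyTorchModel(model, bounds=(0, 1))

images = torch.rand(8, 3, 224, 224)      # placeholder inputs in [0, 1]
labels = fmodel(images).argmax(dim=1)    # placeholder "ground-truth" labels

# FMN is a minimization attack: epsilons=None asks Foolbox to return the
# smallest perturbation it finds rather than testing fixed budgets.
attack = fb.attacks.L2FMNAttack()
raw, clipped, success = attack(fmodel, images, labels, epsilons=None)
print(success, (clipped - images).flatten(1).norm(dim=1))
```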
Experiment Setup: Yes
LLM Response: Hyperparameters. To ensure a fair comparison, we perform an extensive hyperparameter search for each of the considered attacks. We consider two main scenarios: tuning the hyperparameters at the sample level and at the dataset level. (...) For FMN, we run FMN for K = 1000 steps, using γ0 ∈ {0.05, 0.3}, γK = 10^-4, and αK = 10^-5. For ℓ0, ℓ1, and ℓ2, we set α0 ∈ {1, 5, 10}. For ℓ∞, we set α0 ∈ {10^1, 10^2, 10^3}...
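The grid quoted above can be enumerated programmatically. The sketch below reconstructs it from the stated values only; run_fmn is a hypothetical runner, and the paper's actual tuning procedure (sample-level vs. dataset-level scenarios) is more involved than this flat sweep.

```python
import itertools

K = 1000                      # steps, as reported
GAMMA0 = [0.05, 0.3]          # gamma_K is fixed at 1e-4
ALPHA0 = {
    "l0": [1, 5, 10],
    "l1": [1, 5, 10],
    "l2": [1, 5, 10],
    "linf": [10, 100, 1000],  # 10^1, 10^2, 10^3
}                             # alpha_K is fixed at 1e-5

for norm, alphas in ALPHA0.items():
    for gamma0, alpha0 in itertools.product(GAMMA0, alphas):
        config = dict(norm=norm, steps=K, gamma0=gamma0, gammaK=1e-4,
                      alpha0=alpha0, alphaK=1e-5)
        # run_fmn(model, data, **config)   # hypothetical runner
        print(config)
```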