ADef: an Iterative Algorithm to Construct Adversarial Deformations
Authors: Rima Alaifari, Giovanni S. Alberti, Tandri Gauksson
ICLR 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate our results on MNIST with convolutional neural networks and on ImageNet with Inception-v3 and ResNet-101. We evaluate the performance of ADef by applying the algorithm to classifiers trained on the MNIST (LeCun) and ImageNet (Russakovsky et al., 2015) datasets. Below, we briefly describe the setup of the experiments and in tables 1 and 2 we summarize their results. |
| Researcher Affiliation | Academia | Rima Alaifari, Department of Mathematics, ETH Zurich (rima.alaifari@math.ethz.ch); Giovanni S. Alberti, Department of Mathematics, University of Genoa (alberti@dima.unige.it); Tandri Gauksson, Department of Mathematics, ETH Zurich (tandri.gauksson@math.ethz.ch) |
| Pseudocode | Yes | Algorithm ADef. Input: classification model F, image x, correct label l, candidate labels k1, ..., km. Output: deformed image y. Initialize y ← x. While K(y) = l: for j = 1, ..., m compute α_j ← S(Σ_{i=1}^{c} ((∇F_{k_j})_i − (∇F_l)_i) ∇y_i) and τ_j ← −(F_{k_j}(y) − F_l(y)) / ‖α_j‖²_{ℓ²} · Sα_j; then set i ← argmin_{j=1,...,m} ‖τ_j‖_T and y ← y ∘ (id + τ_i). Return y. (A Python sketch of one such update step is given after the table.) |
| Open Source Code | Yes | Our implementation of the algorithm can be found at https://gitlab.math.ethz.ch/tandrig/ADef. |
| Open Datasets | Yes | We evaluate the performance of ADef by applying the algorithm to classifiers trained on the MNIST (LeCun) and ImageNet (Russakovsky et al., 2015) datasets. |
| Dataset Splits | Yes | We use ADef to produce adversarial deformations of the images in the test set. We apply ADef to pretrained Inception-v3 (Szegedy et al., 2016) and ResNet-101 (He et al., 2016) models to generate adversarial deformations for the images in the ILSVRC2012 validation set. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x, TensorFlow 2.x). |
| Experiment Setup | Yes | The algorithm is configured to pursue any label different from the correct label (all incorrect labels are candidate labels). In one configuration it performs smoothing by a Gaussian filter of standard deviation 1/2, uses bilinear interpolation to obtain intermediate pixel intensities, and overshoots by η = 2/10 whenever it converges to a decision boundary; in the other it employs a Gaussian filter of standard deviation 1, bilinear interpolation, and an overshoot factor η = 1/10. For PGD we use 40 iterations, step size 1/100, and 3/10 as the maximum ℓ∞-norm of the perturbation. (A sketch of this PGD configuration follows the table.) |
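
The reconstructed pseudocode above describes a DeepFool-style deformation step. The following is a minimal sketch of how one such step could be implemented for a single-channel image and a single candidate label; it is not the authors' released code (see the GitLab link above), and the PyTorch framing and helper names (`adef_step`, `image_gradient`, `gaussian_smooth`, `warp`) are illustrative assumptions.

```python
import torch
import torch.nn.functional as nnf


def image_gradient(y):
    """Central-difference spatial gradient of an (H, W) image."""
    d_row = torch.zeros_like(y)
    d_col = torch.zeros_like(y)
    d_row[1:-1, :] = (y[2:, :] - y[:-2, :]) / 2.0
    d_col[:, 1:-1] = (y[:, 2:] - y[:, :-2]) / 2.0
    return d_row, d_col


def gaussian_smooth(field, sigma):
    """Apply the smoothing operator S: convolution with a Gaussian kernel."""
    radius = max(1, int(3 * sigma))
    coords = torch.arange(-radius, radius + 1, dtype=field.dtype)
    kernel_1d = torch.exp(-coords ** 2 / (2 * sigma ** 2))
    kernel_1d = kernel_1d / kernel_1d.sum()
    kernel = torch.outer(kernel_1d, kernel_1d)[None, None]
    return nnf.conv2d(field[None, None], kernel, padding=radius)[0, 0]


def warp(y, tau_row, tau_col):
    """Evaluate y at (id + tau) using bilinear interpolation."""
    H, W = y.shape
    rows, cols = torch.meshgrid(torch.arange(H, dtype=y.dtype),
                                torch.arange(W, dtype=y.dtype), indexing="ij")
    # grid_sample expects sampling locations normalized to [-1, 1], ordered (x, y).
    grid_x = 2.0 * (cols + tau_col) / (W - 1) - 1.0
    grid_y = 2.0 * (rows + tau_row) / (H - 1) - 1.0
    grid = torch.stack([grid_x, grid_y], dim=-1)[None]
    return nnf.grid_sample(y[None, None], grid, mode="bilinear",
                           align_corners=True)[0, 0]


def adef_step(model, y, correct_label, candidate_label, sigma=0.5):
    """One deformation step toward `candidate_label` via linearization."""
    y = y.clone().detach().requires_grad_(True)
    logits = model(y[None, None])[0]
    f = logits[candidate_label] - logits[correct_label]  # f < 0 while y is still classified as l
    (grad_f,) = torch.autograd.grad(f, y)                # gradient of F_k - F_l w.r.t. the image
    d_row, d_col = image_gradient(y.detach())
    # alpha = S( (gradient of F_k - F_l) * spatial gradient of y ), a vector field.
    alpha_row = gaussian_smooth(grad_f * d_row, sigma)
    alpha_col = gaussian_smooth(grad_f * d_col, sigma)
    norm_sq = (alpha_row ** 2 + alpha_col ** 2).sum().clamp_min(1e-12)
    # tau = -(F_k(y) - F_l(y)) / ||alpha||^2 * S(alpha)
    tau_row = -f.detach() / norm_sq * gaussian_smooth(alpha_row, sigma)
    tau_col = -f.detach() / norm_sq * gaussian_smooth(alpha_col, sigma)
    return warp(y.detach(), tau_row, tau_col), (tau_row, tau_col)
```

In the full algorithm this step would be repeated for every candidate label, the deformation with the smallest ‖τ‖_T kept, and the loop continued (with an overshoot factor η) until the predicted label changes.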
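For the PGD baseline quoted in the experiment setup, the sketch below (an assumption, not taken from the paper's code) shows an ℓ∞-constrained attack with the stated hyperparameters: 40 iterations, step size 1/100, and perturbation budget 3/10. `model`, `image`, and `label` are placeholders, and pixel values are assumed to lie in [0, 1].

```python
import torch
import torch.nn.functional as nnf


def pgd_linf(model, image, label, eps=0.3, step=0.01, iters=40):
    """Untargeted PGD attack under an l-infinity constraint (sketch)."""
    x_adv = image.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = nnf.cross_entropy(model(x_adv), label)
        (grad,) = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + step * grad.sign()                # ascend the classification loss
            x_adv = image + (x_adv - image).clamp(-eps, eps)  # project onto the eps-ball around the image
            x_adv = x_adv.clamp(0.0, 1.0)                     # stay in the valid pixel range
    return x_adv.detach()
```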