Randomization matters. How to defend against strong adversarial attacks

Authors: Rafael Pinot, Raphael Ettedgui, Geovani Rizk, Yann Chevaleyre, Jamal Atif

ICML 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility

Variable | Result | LLM Response
Research Type | Experimental | Empirical results validate our theoretical analysis, and show that our defense method considerably outperforms Adversarial Training against strong adaptive attacks, by achieving 0.55 accuracy under adaptive PGD-attack on CIFAR10, compared to 0.42 for Adversarial Training. (Section 6, "Experiments: How to build the mixture"; Table 1, "Evaluation on CIFAR10 and CIFAR100 without data augmentation.")
Researcher Affiliation | Academia | (1) Université Paris-Dauphine, PSL Research University, CNRS, LAMSADE, Paris, France; (2) Institut LIST, CEA, Université Paris-Saclay, France.
Pseudocode | Yes | Algorithm 1: Boosted Adversarial Training.
Open Source Code | No | The paper neither links to source code nor states that the code for its methodology is publicly available.
Open Datasets | Yes | We evaluate this method, in Section 6, against strong adaptive attacks on CIFAR10 and CIFAR100 datasets. For CIFAR10 and CIFAR100 datasets (Krizhevsky & Hinton, 2009).
Dataset Splits | No | The paper mentions using the CIFAR10 and CIFAR100 datasets but does not explicitly describe the training, validation, and test splits (e.g., percentages, sample counts, or specific pre-defined splits).
Hardware Specification | No | The paper acknowledges access to "HPC resources of IDRIS under the allocation 2020-101141 made by GENCI" but does not specify the hardware used, such as GPU or CPU models.
Software Dependencies | No | The paper does not list software dependencies with version numbers, such as the programming languages, libraries, or frameworks used for the implementation.
Experiment Setup | Yes | We use ℓ∞-PGD with 20 iterations and ϵ = 0.031 to train the first classifier and to build D. For Adaptive-ℓ∞-PGD we use an epsilon equal to 8/255 (≈ 0.031), a step size equal to 2/255 (≈ 0.008), and we allow random initialization. For Adaptive-ℓ2-C&W we use a learning rate equal to 0.01, 9 binary search steps, an initial constant of 0.001, we allow abortion when the attack has already converged, and we give the results for the different values of the rejection threshold ϵ2 ∈ {0.4, 0.6, 0.8}.
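The Experiment Setup row above reports the ℓ∞-PGD hyperparameters used in the paper (ϵ = 8/255, step size 2/255, 20 iterations, random initialization). A minimal numpy sketch of such an ℓ∞-PGD loop is given below; it is not the authors' code, and `grad_fn` is a hypothetical placeholder for the gradient of the attack loss with respect to the input.

```python
import numpy as np

def pgd_linf(x, grad_fn, eps=8/255, step=2/255, iters=20, rng=None):
    """Sketch of an l_inf-PGD attack with the paper's reported settings.

    x       : clean input in [0, 1]
    grad_fn : returns the gradient of the attack loss w.r.t. the input
    """
    rng = np.random.default_rng(0) if rng is None else rng
    # random initialization inside the l_inf ball of radius eps
    x_adv = np.clip(x + rng.uniform(-eps, eps, size=x.shape), 0.0, 1.0)
    for _ in range(iters):
        g = grad_fn(x_adv)
        x_adv = x_adv + step * np.sign(g)          # signed ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)   # project into the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)           # keep a valid image
    return x_adv

# toy usage: a quadratic "loss" whose gradient is z - 0.4, so the attack
# pushes every pixel toward the upper face of the eps-ball
x0 = np.full((3, 4, 4), 0.5)
adv = pgd_linf(x0, grad_fn=lambda z: z - 0.4)
```

With a real model, `grad_fn` would backpropagate the classification loss through the network; the projection steps are what keep the perturbation within the reported ϵ budget.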
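The defense evaluated above (Algorithm 1, Boosted Adversarial Training) builds a randomized mixture of classifiers. A hedged sketch of how inference with such a mixture could look is shown below; the sampling scheme and the `mixture_predict` helper are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mixture_predict(x, classifiers, weights, rng):
    """Predict with a randomized mixture: sample classifier h_i with
    probability weights[i], then return its prediction on x. The defended
    model is therefore a random variable rather than a fixed map."""
    i = rng.choice(len(classifiers), p=weights)
    return classifiers[i](x)

# toy usage with two dummy "classifiers" that ignore their input
rng = np.random.default_rng(0)
h1 = lambda x: 0
h2 = lambda x: 1
preds = [mixture_predict(None, [h1, h2], [0.8, 0.2], rng) for _ in range(1000)]
```

Over many queries, each classifier is selected in proportion to its mixture weight, which is what makes the output distribution, rather than a single deterministic classifier, the object an adaptive attacker must target.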