EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples
Authors: Pin-Yu Chen, Yash Sharma, Huan Zhang, Jinfeng Yi, Cho-Jui Hsieh
AAAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on MNIST, CIFAR10 and ImageNet show that EAD can yield a distinct set of adversarial examples with small L1 distortion and attains similar attack performance to the state-of-the-art methods in different attack scenarios. More importantly, EAD leads to improved attack transferability and complements adversarial training, suggesting novel insights on leveraging L1 distortion in adversarial machine learning and security implications of DNNs. |
| Researcher Affiliation | Collaboration | (1) AI Foundations Lab, IBM T. J. Watson Research Center, Yorktown Heights, NY 10598, USA; (2) The Cooper Union, New York, NY 10003, USA; (3) University of California, Davis, Davis, CA 95616, USA; (4) Tencent AI Lab, Bellevue, WA 98004, USA |
| Pseudocode | Yes | Algorithm 1: Elastic-Net Attacks to DNNs (EAD); a minimal sketch of its core update appears after this table. |
| Open Source Code | Yes | Our EAD code is publicly available for download (footnote 4: https://github.com/ysharma1126/EAD-Attack). |
| Open Datasets | Yes | Experimental results on MNIST, CIFAR10 and ImageNet show that EAD can yield a distinct set of adversarial examples with small L1 distortion and attains similar attack performance to the state-of-the-art methods in different attack scenarios. |
| Dataset Splits | No | The paper references datasets such as MNIST, CIFAR10, and ImageNet, which have standard splits, and mentions using "test sets", but it does not explicitly state the train/validation/test split percentages or sample counts needed to reproduce the data partitioning. It also states "The image classifiers for MNIST and CIFAR10 are trained based on the DNN models provided by Carlini and Wagner", implying reliance on their splits without detailing them here. |
| Hardware Specification | Yes | All experiments are conducted on a machine with an Intel E5-2690 v3 CPU, 40 GB RAM and a single NVIDIA K80 GPU. |
| Software Dependencies | No | The paper mentions software such as "Carlini and Wagner's framework", the ADAM optimizer, TensorFlow, and the CleverHans package, but does not provide specific version numbers for these software components. |
| Experiment Setup | Yes | For both the EAD and C&W attacks, we use the default setting, which implements 9 binary search steps on the regularization parameter c (starting from 0.001) and runs I = 1000 iterations for each step with the initial learning rate α0 = 0.01. ... In the experiments, we set the initial learning rate α0 = 0.01 with a square-root decay factor in k. ... Unless specified, we set the attack transferability parameter κ = 0 for both attacks. For I-FGM, we perform 10 FGM iterations (the default value) with ϵ-ball clipping; the per-iteration distortion is set to ϵ/10 (see the I-FGM sketch after this table). |
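For concreteness, below is a minimal NumPy sketch of the iterative shrinkage-thresholding (ISTA) update at the heart of Algorithm 1: a gradient step on the smooth part of the elastic-net objective followed by projected soft-thresholding toward the original image. This is an illustrative simplification, not the paper's full algorithm; the function names (`grad_g`, `projected_shrinkage`, `ead_ista`) are assumptions of this sketch, and the paper's momentum step, best-example bookkeeping, and binary search over c are omitted.

```python
import numpy as np

def projected_shrinkage(z, x0, beta):
    """Projected soft-thresholding: shrink each pixel of z toward the
    original image x0 by beta, then keep the result in [0, 1]."""
    upper = np.clip(z, None, 1.0)                 # min(z, 1)
    lower = np.clip(z, 0.0, None)                 # max(z, 0)
    out = np.where(z - x0 > beta, upper, x0)
    out = np.where(z - x0 < -beta, lower, out)
    return out

def ead_ista(x0, grad_g, beta=1e-3, alpha0=0.01, iterations=1000):
    """One binary-search step of an EAD-style attack via ISTA (sketch).

    x0     : original image with pixel values in [0, 1]
    grad_g : caller-supplied gradient of the smooth part
             g(x) = c*f(x) + ||x - x0||_2^2 (an assumption of this sketch)
    beta   : L1 regularization strength
    alpha0 : initial learning rate, decayed as alpha0 / sqrt(k + 1)
    """
    x = x0.copy()
    for k in range(iterations):
        alpha = alpha0 / np.sqrt(k + 1)           # square-root decay, as in the paper
        z = x - alpha * grad_g(x)                 # gradient step on the smooth part
        x = projected_shrinkage(z, x0, beta)      # L1 shrinkage plus box projection
        # Tracking the best successful adversarial example per iteration is omitted here.
    return x
```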
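Likewise, the following is a hedged sketch of the I-FGM baseline (L∞ variant) as configured in the experiment-setup row: 10 iterations, a per-iteration step of ϵ/10, and clipping to the ϵ-ball around the original image as well as to the valid pixel range. The name `grad_loss` denotes an assumed, user-supplied gradient of the attack loss with respect to the input; it is not defined in the paper text.

```python
import numpy as np

def ifgm_linf(x0, grad_loss, eps, steps=10):
    """Iterative FGM (L_inf variant), illustrative sketch of the reported setup."""
    step = eps / steps                            # per-iteration distortion eps/10
    x = x0.copy()
    for _ in range(steps):
        x = x + step * np.sign(grad_loss(x))      # FGM step along the gradient sign
        x = np.clip(x, x0 - eps, x0 + eps)        # project back into the eps-ball
        x = np.clip(x, 0.0, 1.0)                  # keep valid pixel values
    return x
```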