Most ReLU Networks Suffer from $\ell^2$ Adversarial Perturbations
Authors: Amit Daniely, Hadas Shacham
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We made a small experiment on the MNIST data set (see https://github.com/hadasdas/L2AdversarialPerturbations). We normalized the examples to have a norm of 784 (784 is the dimension of the examples), and trained networks of depth 2-8, with 100 neurons at every hidden layer. We modified the classification task so that the network was trained to distinguish even from odd digits. We then sampled 1000 examples and sought an adversarial example for each of them using gradient descent (GD). Figure 5 shows the histogram and average of the distances at which the adversarial examples were found. (A hedged code sketch of this procedure is given after the table.) |
| Researcher Affiliation | Collaboration | Amit Daniely, School of Computer Science and Engineering, The Hebrew University, Jerusalem, Israel, and Google Research Tel-Aviv (amit.daniely@mail.huji.ac.il); Hadas Schacham, School of Computer Science and Engineering, The Hebrew University, Jerusalem, Israel (hadas.schacham@mail.huji.ac.il) |
| Pseudocode | No | The paper presents theoretical proofs and mathematical derivations but does not include any pseudocode or algorithm blocks. |
| Open Source Code | Yes | We made a small experiment on the MNIST data set (see https://github.com/hadasdas/L2AdversarialPerturbations). |
| Open Datasets | Yes | The experiment uses the publicly available MNIST benchmark: "We made a small experiment on the MNIST data set (see https://github.com/hadasdas/L2AdversarialPerturbations)." |
| Dataset Splits | No | The paper mentions training on MNIST and sampling 1000 examples for seeking adversarial examples, but it does not provide specific training, validation, or test split percentages or sample counts for dataset partitioning. |
| Hardware Specification | No | The paper does not provide any specific details regarding the hardware used to run the experiments (e.g., GPU models, CPU types, or memory specifications). |
| Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies or libraries used in the experiments. |
| Experiment Setup | Yes | We normalized the examples to have a norm of 784 (784 is the dimension of the examples), and trained networks of depth 2-8, with 100 neurons at every hidden layer. We modified the classification task so that the network was trained to distinguish even from odd digits. |
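
The authors' implementation is in the linked repository; the following is a minimal, hypothetical PyTorch sketch of the setup quoted above (examples normalized to norm 784, ReLU networks of depth 2-8 with 100-unit hidden layers trained to separate even from odd MNIST digits, and a gradient-descent search for the nearest $\ell^2$ adversarial perturbation on 1000 sampled examples). Everything not quoted from the paper, including the framework, the `make_mlp`/`adversarial_distance` helper names, the loss, the optimizer, and the learning rates, is an assumption and may differ from the authors' code.

```python
# Hypothetical sketch of the MNIST experiment described in the table above.
# The authors' actual code (https://github.com/hadasdas/L2AdversarialPerturbations)
# may use a different framework, loss, optimizer, and hyperparameters.
import torch
import torch.nn as nn
from torchvision import datasets

DIM = 784  # flattened MNIST dimension


def make_mlp(depth, width=100):
    """ReLU network of the given depth with 100-unit hidden layers (as in the paper)."""
    layers, in_dim = [], DIM
    for _ in range(depth - 1):
        layers += [nn.Linear(in_dim, width), nn.ReLU()]
        in_dim = width
    layers.append(nn.Linear(in_dim, 1))  # single logit: even vs. odd digit
    return nn.Sequential(*layers)


def load_data():
    ds = datasets.MNIST("data", train=True, download=True)
    x = ds.data.float().reshape(-1, DIM)
    x = DIM * x / x.norm(dim=1, keepdim=True)   # normalize each example to norm 784
    y = (ds.targets % 2 == 0).float() * 2 - 1   # +1 for even digits, -1 for odd
    return x, y


def train(net, x, y, epochs=5, lr=1e-3, batch=128):
    """Standard mini-batch training; optimizer and hyperparameters are assumptions."""
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(epochs):
        perm = torch.randperm(len(x))
        for i in range(0, len(x), batch):
            idx = perm[i:i + batch]
            loss = nn.functional.soft_margin_loss(net(x[idx]).squeeze(1), y[idx])
            opt.zero_grad()
            loss.backward()
            opt.step()


def adversarial_distance(net, x0, y0, steps=1000, lr=0.1):
    """GD search for a perturbation delta that flips the logit; returns ||delta||_2."""
    delta = torch.zeros_like(x0, requires_grad=True)
    opt = torch.optim.SGD([delta], lr=lr)
    for _ in range(steps):
        logit = net(x0 + delta).squeeze()
        if (logit * y0).item() < 0:              # label flipped: adversarial example found
            return delta.detach().norm().item()
        loss = logit * y0                        # push the correct-class margin down
        opt.zero_grad()
        loss.backward()
        opt.step()
    return float("nan")                          # no flip found within the step budget


if __name__ == "__main__":
    x, y = load_data()
    net = make_mlp(depth=4)                      # the paper trains depths 2 through 8
    train(net, x, y)
    dists = [adversarial_distance(net, x[i:i + 1], y[i]) for i in range(1000)]
    print("mean adversarial distance:", torch.tensor(dists).nanmean().item())
```

The search minimizes the correct-class margin $y \cdot f(x + \delta)$ over the perturbation $\delta$ and records $\|\delta\|_2$ at the first sign flip; this is meant to mirror the "distance at which the adversarial example was found" that the paper's Figure 5 summarizes as a histogram and average.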