Adversarial Examples in Multi-Layer Random ReLU Networks
Authors: Peter L. Bartlett, Sébastien Bubeck, Yeshwanth Cherapanamjeri
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | In this paper, we prove that adversarial examples also arise in deep ReLU networks with random weights for a wide variety of network architectures: those with constant depth and polynomially-related widths. Our proof shows that adversarial examples arise in these networks because the functions they compute are locally very similar to random linear functions. The main result is for networks with constant depth, but we also show that some constraint on depth is necessary for a result of this kind, because there are suitably deep networks that, with constant probability, compute a function that is close to constant. The following theorem is the main result of the paper. Theorem 1.1. Fix ℓ ∈ ℕ. There are constants c₁, c₂, c₃ that depend on ℓ for which the following holds. (An illustrative sketch of this local-linearity mechanism appears after the table.) |
| Researcher Affiliation | Collaboration | Peter L. Bartlett, Department of Electrical Engineering and Computer Science and Department of Statistics, UC Berkeley; Sébastien Bubeck, Microsoft Research Redmond; Yeshwanth Cherapanamjeri, Department of Electrical Engineering and Computer Science, UC Berkeley |
| Pseudocode | No | The paper does not include any pseudocode or algorithm blocks. Its content consists of mathematical proofs and theoretical analysis. |
| Open Source Code | No | The paper is theoretical and does not mention releasing any source code for the described methodology. No links to repositories or statements about code availability are present. |
| Open Datasets | No | The paper is theoretical and does not use or train on any datasets. No information about public dataset availability is provided. |
| Dataset Splits | No | The paper is theoretical and does not involve empirical experiments with datasets. Therefore, no training/test/validation dataset splits are discussed. |
| Hardware Specification | No | The paper is theoretical and does not describe any experiments requiring hardware. Therefore, no hardware specifications are mentioned. |
| Software Dependencies | No | The paper is purely theoretical and does not mention any software dependencies with specific version numbers that would be required to replicate experiments. |
| Experiment Setup | No | The paper is theoretical and focuses on mathematical proofs, not empirical experiments. As such, there is no discussion of experimental setup details like hyperparameters or training configurations. |
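
To illustrate the mechanism the paper identifies (the function computed by a random constant-depth ReLU network is locally close to a random linear function), here is a minimal numerical sketch. It is not the paper's construction: the widths, the He-style 2/m weight variance, and the gradient-direction perturbation are our own illustrative choices. The point it demonstrates is that for a unit-norm input in dimension d, the output magnitude is roughly 1/√d times the gradient norm, so a perturbation whose size shrinks like 1/√d already flips the sign of the output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative architecture: constant depth, polynomially related widths,
# i.i.d. Gaussian weights -- the regime addressed by Theorem 1.1.
widths = [1000, 1000, 1000, 1]  # input dim, two hidden layers, scalar output
weights = [rng.normal(0.0, np.sqrt(2.0 / m), size=(n, m))
           for m, n in zip(widths[:-1], widths[1:])]

def forward_and_grad(x):
    """Evaluate the network and its exact input gradient at x.

    A bias-free ReLU network is piecewise linear, so the gradient is the
    product of the weight matrices masked by the activation pattern at x.
    """
    h, masks = x, []
    for W in weights[:-1]:
        pre = W @ h
        masks.append(pre > 0)
        h = np.maximum(pre, 0.0)
    out = (weights[-1] @ h).item()
    g = weights[-1].ravel()
    for W, m in zip(reversed(weights[:-1]), reversed(masks)):
        g = (g * m) @ W
    return out, g

x = rng.normal(size=widths[0])
x /= np.linalg.norm(x)          # unit-norm input
f, g = forward_and_grad(x)

# Because f is locally close to a linear function, moving a distance of
# roughly 2|f| / ||g|| along the gradient direction flips the output's sign.
delta = -2.0 * f / np.dot(g, g) * g
f_adv, _ = forward_and_grad(x + delta)

print(f"f(x)              = {f:+.4f}")
print(f"f(x + delta)      = {f_adv:+.4f}")
print(f"||delta|| / ||x|| = {np.linalg.norm(delta):.3f}")
```

With the widths above, this typically reports a sign flip from a perturbation whose norm is only a few percent of ‖x‖, and increasing the input dimension shrinks that ratio roughly like 1/√d, consistent with the local-linearity explanation the paper gives.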