Adversarial Robustness Guarantees for Random Deep Neural Networks
Authors: Giacomo De Palma, Bobak Kiani, Seth Lloyd
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We explore the properties of adversarial examples for deep neural networks with random weights and biases, and prove that for any p ≥ 1, the ℓ^p distance of any given input from the classification boundary scales as one over the square root of the dimension of the input times the ℓ^p norm of the input. The results are based on the recently proved equivalence between Gaussian processes and deep neural networks in the limit of infinite width of the hidden layers, and are validated with experiments on both random deep neural networks and deep neural networks trained on the MNIST and CIFAR10 datasets. (This scaling is restated as a formula after the table.) |
| Researcher Affiliation | Academia | 1Scuola Normale Superiore, Pisa, Italy 2Department of Mechanical Engineering, MIT, Cambridge MA, USA 3Research Laboratory of Electronics, MIT, Cambridge MA, USA 4Department of Electrical Engineering & Computer Science, MIT, Cambridge MA, USA. |
| Pseudocode | No | The paper describes network architectures with mathematical expressions but does not include any blocks labeled 'Pseudocode' or 'Algorithm'. |
| Open Source Code | Yes | Code to replicate experiments published at https://github.com/bkiani/Adversarial-robustness-guarantees-for-random-deep-neural-networks |
| Open Datasets | Yes | We extend our experimental analysis to networks trained on MNIST and CIFAR10 data. |
| Dataset Splits | No | The paper mentions 'training set accuracy' and 'training and test data' but does not state split percentages or sample counts for training, validation, and testing, nor does it mention a validation split at all. |
| Hardware Specification | No | The paper does not specify any particular hardware used for running the experiments (e.g., specific GPU or CPU models). |
| Software Dependencies | No | The paper does not list specific software dependencies with their version numbers required to replicate the experiments. |
| Experiment Setup | Yes | Networks were trained for 15 and 25 epochs for the MNIST and CIFAR10 datasets respectively, achieving greater than 98% training set accuracy in all cases. We refer to the Supplementary Manuscript for full details on the training of the networks. (A hedged training-loop sketch follows the table.) |
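
Restated as a formula, the scaling claim quoted in the Research Type row reads as follows. The constant factor and the high-probability qualifiers are left implicit here; the paper makes them precise via the Gaussian-process equivalence.

```latex
% l^p distance of an input x in R^n from the classification boundary of a
% random deep network: for every p >= 1 it scales, up to a constant, as
\[
  \min_{x' \,:\, \mathrm{class}(x') \neq \mathrm{class}(x)}
    \|x - x'\|_p \;\sim\; \frac{\|x\|_p}{\sqrt{n}},
  \qquad p \ge 1,
\]
% where n is the dimension of the input x.
```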
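The Experiment Setup row reports only the epoch counts (15 for MNIST, 25 for CIFAR10) and the >98% training-accuracy criterion, deferring the rest to the Supplementary Manuscript. Below is a minimal PyTorch sketch of that setup; the architecture, optimizer, batch size, and learning rate are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of the reported training setup. Only the epoch counts
# and the >98% training-accuracy target come from the paper; the network,
# optimizer, and hyperparameters below are assumed for illustration.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

def train(dataset_cls, epochs, in_dim, num_classes=10):
    data = dataset_cls(root="data", train=True, download=True,
                       transform=transforms.ToTensor())
    loader = DataLoader(data, batch_size=128, shuffle=True)
    # Plain fully connected net as a stand-in (assumed, not the paper's).
    model = nn.Sequential(nn.Flatten(),
                          nn.Linear(in_dim, 512), nn.ReLU(),
                          nn.Linear(512, 512), nn.ReLU(),
                          nn.Linear(512, num_classes))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model

# Epoch counts as stated in the Experiment Setup row.
mnist_model = train(datasets.MNIST, epochs=15, in_dim=28 * 28)
cifar_model = train(datasets.CIFAR10, epochs=25, in_dim=3 * 32 * 32)
```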