Rademacher Complexity for Adversarially Robust Generalization
Authors: Dong Yin, Kannan Ramchandran, Peter Bartlett
ICML 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate experimental results that validate our theoretical findings. |
| Researcher Affiliation | Academia | 1Department of Electrical Engineering and Computer Sciences, UC Berkeley, Berkeley, CA, USA 2Department of Statistics, UC Berkeley, Berkeley, CA, USA. |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any specific links to source code for the methodology or include an explicit statement about the code being made available. |
| Open Datasets | Yes | Our experiments are implemented with Tensorflow (Abadi et al., 2016) on the MNIST dataset (LeCun et al., 1998). |
| Dataset Splits | No | The paper mentions using training and test data ('the training set of MNIST', 'the adversarial training and test error'), but it does not specify a separate validation set or the train/validation/test proportions needed to reproduce the data partitioning. |
| Hardware Specification | No | The paper only states that 'Cloud computing resources are provided by AWS Cloud Credits for Research.' without providing any specific details on GPU models, CPU types, or other hardware components used for the experiments. |
| Software Dependencies | No | The paper mentions 'Tensorflow (Abadi et al., 2016)' but does not give a version number for it or for any other software dependency, so the software environment cannot be reproduced exactly. |
| Experiment Setup | Yes | In our first experiment, we vary the values of ϵ and λ, and for each (ϵ, λ) pair, we conduct 10 runs of the training algorithm, and in each run we sample the 1000 training data independently. Our training algorithm alternates between mini-batch stochastic gradient descent with respect to W and computing adversarial examples on the chosen batch in each iteration. |
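
The setup quoted in the last row can be made concrete with a short sketch. Below is a minimal Python/PyTorch approximation of the described procedure: sample 1000 MNIST training points, then alternate between computing adversarial examples on the current mini-batch and taking a stochastic gradient step on the weights W. The one-step FGSM attack, the linear model, the learning rate, and the reading of λ as a weight-decay coefficient are all assumptions made here for illustration; the paper's own TensorFlow implementation is not released.

```python
# Hypothetical sketch of the alternating adversarial-training loop described above.
# This is NOT the authors' TensorFlow code; the attack (one-step FGSM), model size,
# optimizer settings, and the role of lambda (weight decay) are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, Subset
from torchvision import datasets, transforms

epsilon = 0.1       # l_inf perturbation budget (varied over a grid in the paper)
lam = 1e-3          # assumed here to be a weight-decay / regularization coefficient
n_train = 1000      # the paper samples 1000 training points independently per run
batch_size = 100

mnist = datasets.MNIST("data", train=True, download=True,
                       transform=transforms.ToTensor())
subset = Subset(mnist, torch.randperm(len(mnist))[:n_train].tolist())
loader = DataLoader(subset, batch_size=batch_size, shuffle=True)

model = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))   # weight matrix W
opt = torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=lam)

def fgsm(x, y, eps):
    """One-step l_inf attack: perturb x in the sign of the loss gradient."""
    x_adv = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    return (x + eps * grad.sign()).clamp(0.0, 1.0).detach()

for epoch in range(20):
    for x, y in loader:
        # Step 1: compute adversarial examples on the chosen batch.
        x_adv = fgsm(x, y, epsilon)
        # Step 2: mini-batch SGD step on the adversarial loss with respect to W.
        opt.zero_grad()
        F.cross_entropy(model(x_adv), y).backward()
        opt.step()
```

Sweeping `epsilon` and `lam` over a grid and repeating this loop 10 times with a fresh 1000-point subsample each time would mirror the structure of the paper's first experiment.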