Adversarial Learning Guarantees for Linear Hypotheses and Neural Networks

Authors: Pranjal Awasthi, Natalie Frank, Mehryar Mohri

ICML 2020

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We give upper and lower bounds for the adversarial empirical Rademacher complexity of linear hypotheses with adversarial perturbations measured in ℓr-norm for an arbitrary r ≥ 1. We then extend our analysis to provide Rademacher complexity lower and upper bounds for a single ReLU unit. Finally, we give adversarial Rademacher complexity bounds for feed-forward neural networks with one hidden layer. We provide a brief sketch of the proof of Theorem 4 and provide the details in Appendix B.
Researcher Affiliation | Collaboration | 1 Google Research and Rutgers University; 2 Courant Institute of Mathematical Sciences; 3 Google Research and Courant Institute of Mathematical Sciences.
Pseudocode | No | The paper is theoretical and focuses on mathematical proofs and bounds; it therefore contains no pseudocode or algorithm blocks.
Open Source Code | No | The paper makes no explicit statement about releasing source code for the described methodology, and it includes no links to code repositories.
Open Datasets | No | The paper is theoretical and describes no experiments on specific publicly available datasets; it refers to abstract 'samples' rather than concrete datasets.
Dataset Splits | No | The paper conducts no experiments and therefore specifies no training, validation, or test dataset splits.
Hardware Specification | No | The paper is purely theoretical and describes no experimental setup or hardware used for computation.
Software Dependencies | No | The paper describes no computational implementation and lists no software dependencies with version numbers.
Experiment Setup | No | The paper conducts no experiments and therefore provides no details about an experimental setup, hyperparameters, or training configurations.
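The linear-hypothesis result quoted above rests on a standard observation: for h(x) = ⟨w, x⟩, the worst-case margin under an ℓr-bounded perturbation has a closed form in the dual norm of w. The following NumPy sketch is not the authors' code (the paper releases none); all function and variable names are ours, and it simply verifies the dual-norm identity numerically for r = 2.

```python
import numpy as np

# Hypothetical sketch (not from the paper's authors): for a linear
# hypothesis h(x) = <w, x>, the worst-case margin under an l_r-bounded
# perturbation admits the closed form
#   inf_{||d||_r <= eps} y<w, x + d> = y<w, x> - eps * ||w||_{r*},
# where r* is the dual exponent, 1/r + 1/r* = 1.

def adversarial_margin(w, x, y, eps, r):
    """Closed-form worst-case margin of x -> <w, x> under an l_r attack."""
    r_star = np.inf if r == 1.0 else r / (r - 1.0)
    return y * np.dot(w, x) - eps * np.linalg.norm(w, ord=r_star)

# Brute-force check for r = 2: sample random l_2 perturbations of norm eps
# and include the known worst-case direction d* = -y * eps * w / ||w||_2.
rng = np.random.default_rng(0)
dim = 5
w = rng.standard_normal(dim)
x = rng.standard_normal(dim)
y, eps, r = 1.0, 0.3, 2.0

deltas = rng.standard_normal((10_000, dim))
deltas = eps * deltas / np.linalg.norm(deltas, axis=1, keepdims=True)
worst = -y * eps * w / np.linalg.norm(w)
deltas = np.vstack([deltas, worst])

closed_form = adversarial_margin(w, x, y, eps, r)
brute = np.min(y * ((x + deltas) @ w))
print(closed_form, brute)  # the two values agree
```

The identity is what lets the adversarial supremum over perturbations be absorbed into a dual-norm penalty on w, which is the starting point for the paper's Rademacher complexity bounds.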