Reducing Adversarially Robust Learning to Non-Robust PAC Learning
Authors: Omar Montasser, Steve Hanneke, Nati Srebro
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We study the problem of reducing adversarially robust learning to standard PAC learning, i.e., the complexity of learning adversarially robust predictors using only black-box access to a non-robust learner. We give a reduction that can robustly learn any hypothesis class C using any non-robust learner A for C. The number of calls to A depends logarithmically on the number of allowed adversarial perturbations per example, and we give a lower bound showing this is unavoidable. |
| Researcher Affiliation | Academia | Omar Montasser (omar@ttic.edu), Steve Hanneke (steve.hanneke@gmail.com), Nathan Srebro (nati@ttic.edu), Toyota Technological Institute at Chicago |
| Pseudocode | Yes | Algorithm 1: Robustify The Non-Robust |
| Open Source Code | No | The paper does not provide any links to open-source code or state that code will be released. |
| Open Datasets | No | The paper is theoretical and does not involve empirical training on specific datasets. |
| Dataset Splits | No | The paper is theoretical and does not involve empirical validation on specific datasets, hence no training/validation/test splits are mentioned. |
| Hardware Specification | No | The paper is theoretical and does not mention any hardware specifications used for experiments. |
| Software Dependencies | No | The paper is theoretical and does not mention specific software dependencies with version numbers for experiments. |
| Experiment Setup | No | The paper is theoretical and does not describe an experimental setup with hyperparameters or training settings. |
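The reduction summarized above (Algorithm 1, "Robustify The Non-Robust") can be illustrated with a toy sketch: inflate each training example into its finite set of allowed perturbations, repeatedly call the black-box non-robust learner on a reweighted version of the inflated set, and output a majority vote. This is a minimal boosting-style illustration of the idea, not the paper's exact algorithm; the stump learner, the perturbation set `U(x) = {x-eps, x, x+eps}`, and all parameter choices below are hypothetical stand-ins.

```python
import math

def stump_learner(points, labels, weights):
    """Weighted ERM over 1-D threshold stumps: a hypothetical stand-in
    for the assumed black-box non-robust learner A."""
    best = None
    cand = sorted(set(points))
    thresholds = ([cand[0] - 1.0]
                  + [(a + b) / 2 for a, b in zip(cand, cand[1:])]
                  + [cand[-1] + 1.0])
    for t in thresholds:
        for sign in (+1, -1):
            err = sum(w for x, y, w in zip(points, labels, weights)
                      if sign * (1 if x >= t else -1) != y)
            if best is None or err < best[0]:
                best = (err, t, sign)
    _, t, sign = best
    return lambda x: sign * (1 if x >= t else -1)

def robustify(data, perturb, learner, rounds=20, eta=0.5):
    """Boosting-style reduction sketch: inflate each example into its
    perturbation set, reweight multiplicatively, call the non-robust
    learner each round, and combine by majority vote."""
    inflated = [(z, y) for x, y in data for z in perturb(x)]
    weights = [1.0] * len(inflated)
    hyps = []
    for _ in range(rounds):
        total = sum(weights)
        norm = [w / total for w in weights]
        h = learner([z for z, _ in inflated],
                    [y for _, y in inflated], norm)
        hyps.append(h)
        # Multiplicative-weights update: upweight points h gets wrong.
        weights = [w * (math.exp(eta) if h(z) != y else 1.0)
                   for w, (z, y) in zip(weights, inflated)]
    def majority(x):
        return 1 if sum(h(x) for h in hyps) >= 0 else -1
    return majority

def robust_error(h, data, perturb):
    """Fraction of examples where some allowed perturbation fools h."""
    bad = sum(1 for x, y in data if any(h(z) != y for z in perturb(x)))
    return bad / len(data)

eps = 0.3
def perturb(x):
    # Finite perturbation set per example, as in the logarithmic bound.
    return [x - eps, x, x + eps]

data = [(-2.0, -1), (-1.5, -1), (-1.0, -1),
        (1.0, 1), (1.5, 1), (2.0, 1)]
h = robustify(data, perturb, stump_learner)
print(robust_error(h, data, perturb))  # 0.0 on this separable toy set
```

On this well-separated toy set a single stump already has zero robust training error, so the majority vote does too; the point of the sketch is only the call pattern, where the number of invocations of the learner controls the reduction's cost.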