Adversarially Robust Learning: A Generic Minimax Optimal Learner and Characterization

Authors: Omar Montasser, Steve Hanneke, Nati Srebro

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | "We present a minimax optimal learner for the problem of learning predictors robust to adversarial examples at test-time. Interestingly, we find that this requires new algorithmic ideas and approaches to adversarially robust learning. In particular, we show, in a strong negative sense, the suboptimality of the robust learner proposed by [21] and a broader family of learners we identify as local learners. Our results are enabled by adopting a global perspective, specifically, through a key technical contribution: the global one-inclusion graph, which may be of independent interest, that generalizes the classical one-inclusion graph due to [17]. Finally, as a byproduct, we identify a dimension characterizing qualitatively and quantitatively what classes of predictors H are robustly learnable. This resolves an open problem due to [21], and closes a (potentially) infinite gap between the established upper and lower bounds on the sample complexity of adversarially robust learning." and "If you ran experiments... [N/A] (a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [N/A]"
Researcher Affiliation | Academia | Omar Montasser (TTI-Chicago, omar@ttic.edu); Steve Hanneke (Purdue University, steve.hanneke@gmail.com); Nathan Srebro (TTI-Chicago, nati@ttic.edu)
Pseudocode | Yes | Algorithm 1: Converting an Orientation O of G^U_H to a Learner A_O. (A sketch of the classical one-inclusion prediction strategy that this construction generalizes appears below the table.)
Open Source Code | No | If you are including theoretical results... [Yes] (a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [N/A]
Open Datasets | No | The paper is theoretical and does not involve empirical experiments using datasets. Therefore, it does not mention a training dataset or its accessibility.
Dataset Splits | No | The paper is theoretical and does not involve empirical experiments. Therefore, it does not provide specific dataset split information (training, validation, test) for reproduction.
Hardware Specification | No | The paper is theoretical and does not report on experiments. It explicitly states '[N/A]' for hardware specifications in its reproducibility statement.
Software Dependencies | No | The paper is theoretical and does not report on experiments. It does not provide specific ancillary software details with version numbers.
Experiment Setup | No | The paper is theoretical and does not report on experiments. Therefore, it does not contain specific experimental setup details such as hyperparameter values or training configurations.
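
For context on the "Pseudocode" row: the paper's Algorithm 1 converts an orientation O of its global one-inclusion graph G^U_H into a learner A_O. The snippet below is only a minimal, hedged sketch of the classical (non-robust) one-inclusion prediction strategy due to [17] that the paper generalizes, written in Python for a finite hypothesis class. The function name `one_inclusion_predict`, the greedy orientation heuristic, and the fallback behavior are illustrative assumptions, not the paper's construction.

```python
from itertools import combinations

def one_inclusion_predict(hypotheses, labeled, x_test):
    """Classical one-inclusion prediction for a finite class of
    hypotheses (callables x -> {0, 1}); `labeled` is a list of (x, y)."""
    xs = [x for x, _ in labeled] + [x_test]
    ys = tuple(y for _, y in labeled)
    n = len(labeled)

    # Vertices: distinct behaviors of the class on the n+1 points.
    vertices = sorted({tuple(h(x) for x in xs) for h in hypotheses})

    # Edges: pairs of behaviors that differ in exactly one coordinate.
    edges = []
    for u, v in combinations(vertices, 2):
        diff = [i for i in range(n + 1) if u[i] != v[i]]
        if len(diff) == 1:
            edges.append((u, v, diff[0]))

    # Greedy orientation: make the tail the endpoint with the smaller
    # current out-degree (a heuristic stand-in for the minimum-max-out-degree
    # orientation used in the classical analysis).
    out_degree = {v: 0 for v in vertices}
    head_of = {}
    for u, v, i in edges:
        tail, head = (u, v) if out_degree[u] <= out_degree[v] else (v, u)
        head_of[(u, v, i)] = head
        out_degree[tail] += 1

    # The two completions of the observed labels differ only on the test
    # coordinate; if both are realizable they form an edge, and we predict
    # the label of the vertex the edge points to.
    u, v = ys + (0,), ys + (1,)
    if u in out_degree and v in out_degree:
        return head_of[(u, v, n)][n]
    if u in out_degree:
        return 0
    if v in out_degree:
        return 1
    return 0  # the class cannot realize the sample; arbitrary fallback
```

In the classical analysis, an orientation with small maximum out-degree yields an expected leave-one-out error of at most VC(H)/(n+1); the paper's contribution is a global analogue of this graph and orientation argument for the adversarially robust setting.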