Adversarially Robust Learning with Uncertain Perturbation Sets

Authors: Tosca Lechner, Vinayak Pathak, Ruth Urner

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | While prior literature has studied both scenarios with completely known and completely unknown perturbation sets, we propose an in-between setting of learning with respect to a class of perturbation sets. We show that in this setting we can improve on previous results with completely unknown perturbation sets, while still addressing the concerns of not having perfect knowledge of these sets in real life. In particular, we give the first positive results for the learnability of infinite Littlestone classes when having access to a perfect-attack oracle. We also consider a setting of learning with abstention, where predictions are considered robustness violations only when the wrong label prediction is made within the perturbation set. In Section 3 we analyze perturbation type classes for which this order is a total order. Theorem 1 states that every hypothesis class with finite VC dimension can be robustly learned in the realizable case with respect to any totally ordered perturbation type class. We then prove that this result cannot be extended to the agnostic case (Observation 1).
Researcher Affiliation | Collaboration | Tosca Lechner, University of Waterloo, Waterloo, Ontario, Canada, tlechner@uwaterloo.ca; Vinayak Pathak, Layer 6 AI, Toronto, Ontario, Canada, vinayak@layer6.ai; Ruth Urner, York University, Toronto, Ontario, Canada, uruth@eecs.yorku.ca
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any statement or link indicating the release of open-source code for the described methodology.
Open Datasets | No | The paper is theoretical and does not conduct experiments on specific datasets. It defines data generation as a distribution P over X × Y but does not specify a concrete, publicly available dataset used for training.
Dataset Splits | No | The paper is theoretical and does not describe empirical experiments or data splits for validation.
Hardware Specification | No | The paper is theoretical and does not describe any computational experiments, so no hardware specifications are provided.
Software Dependencies | No | The paper is theoretical and does not describe any computational experiments or list specific software dependencies with version numbers.
Experiment Setup | No | The paper is theoretical and does not describe any empirical experimental setup details or hyperparameters.
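To make the two robust-loss notions in the review concrete (the standard robust loss with respect to a perturbation set, and the abstention variant in which only a wrong label predicted inside the perturbation set counts as a violation), here is a minimal Python sketch. The function names, the binary-label encoding, and the toy discretized radius-0.5 ball perturbation are all illustrative assumptions of ours, not definitions taken from the paper.

```python
from typing import Callable, Iterable, Optional

Label = int  # binary labels encoded as 0/1 (our assumption)

def robust_loss(h: Callable[[float], Label],
                U: Callable[[float], Iterable[float]],
                x: float, y: Label) -> int:
    """Robust 0/1 loss: (x, y) is a violation if any perturbation
    z in U(x) is classified differently from the true label y."""
    return int(any(h(z) != y for z in U(x)))

def abstention_robust_loss(h: Callable[[float], Optional[Label]],
                           U: Callable[[float], Iterable[float]],
                           x: float, y: Label) -> int:
    """Abstention variant: only a *wrong label* predicted inside the
    perturbation set counts as a violation; abstaining (None) does not."""
    return int(any(h(z) is not None and h(z) != y for z in U(x)))

# Toy perturbation set: a discretized ball of radius 0.5 around x
# (an illustrative choice, not from the paper).
U = lambda x: (x - 0.5, x, x + 0.5)

# A plain threshold classifier, and one that abstains near its boundary.
threshold = lambda z: 1 if z >= 0 else 0
cautious = lambda z: None if abs(z) < 0.5 else (1 if z >= 0 else 0)

# The threshold classifier is flipped by the perturbation x - 0.5,
# while the cautious classifier abstains there instead of erring.
print(robust_loss(threshold, U, 0.2, 1))
print(abstention_robust_loss(cautious, U, 0.2, 1))
```

The contrast shows why abstention can only help: every hypothesis with zero abstention-robust loss would also need to commit to a correct label everywhere in U(x) to achieve zero standard robust loss.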