Adversarial examples from computational constraints

Authors: Sébastien Bubeck, Yin Tat Lee, Eric Price, Ilya Razenshteyn

ICML 2019

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | First we prove that, for a broad set of classification tasks, the mere existence of a robust classifier implies that it can be found by a possibly exponential-time algorithm with relatively few training examples. Then we give two particular classification tasks where learning a robust classifier is computationally intractable. More precisely we construct two binary classification tasks in high dimensional space which are (i) information theoretically easy to learn robustly for large perturbations, (ii) efficiently learnable (non-robustly) by a simple linear separator, (iii) yet are not efficiently robustly learnable, even for small perturbations. (A hedged formal sketch of the robustness notion behind these claims appears after the table.)
Researcher Affiliation | Collaboration | Microsoft Research, Redmond, Washington, USA; University of Washington, Seattle, Washington, USA; University of Texas, Austin, Texas, USA.
Pseudocode | No | The paper includes mathematical definitions and theorems but does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any statement about releasing source code or provide links to a code repository.
Open Datasets | No | The paper is theoretical and focuses on proofs and intractability results. It mentions the CIFAR-10 dataset only when discussing prior work (Madry et al., 2018); it does not use that or any other dataset for its own training or analysis, and it provides no dataset access information.
Dataset Splits | No | The paper is theoretical and does not conduct empirical experiments that would require explicit training/validation/test dataset splits. Therefore, no such information is provided.
Hardware Specification | No | The paper does not provide any specific details about hardware used for experiments, such as GPU models, CPU types, or cloud computing specifications. The paper is theoretical.
Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., Python, PyTorch, or other libraries/solvers). The paper is theoretical.
Experiment Setup | No | The paper is theoretical and does not describe an experimental setup with hyperparameters, training configurations, or system-level settings.
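
As a reading aid for the claims quoted in the Research Type row, the following is a minimal sketch of the standard perturbation-bounded ("robust") risk that statements of this kind are usually phrased in terms of. The notation (the distribution D, classifier f, budget epsilon, and the unspecified norm) is illustrative and assumed here, not taken verbatim from the paper; the paper's own definitions should be consulted for the exact perturbation model.

% Minimal compilable LaTeX sketch of a perturbation-bounded ("robust") risk.
% Assumption: notation is illustrative, not the paper's verbatim definitions.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
Given a data distribution $\mathcal{D}$ over pairs $(x,y)$, a classifier $f$, and a
perturbation budget $\varepsilon \ge 0$, the $\varepsilon$-robust risk is
\[
  R_{\varepsilon}(f) \;=\; \Pr_{(x,y)\sim\mathcal{D}}
  \bigl[\, \exists\, x' \ \text{with} \ \lVert x' - x \rVert \le \varepsilon
  \ \text{and} \ f(x') \neq y \,\bigr],
\]
so $R_{0}(f)$ is the usual (non-robust) classification error.
\end{document}

Under this assumed formalization, the quoted items read roughly as: (i) a classifier with small robust risk at a large budget exists and can be found from relatively few samples, possibly in exponential time; (ii) a simple linear separator achieves small standard error efficiently; (iii) no efficient algorithm, in the paper's chosen model of computation, achieves small robust risk even at a much smaller budget.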