On the Hardness of Robust Classification

Authors: Pascale Gourdeau, Varun Kanade, Marta Kwiatkowska, James Worrell

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | In this paper we study the feasibility of robust learning from the perspective of computational learning theory, considering both sample and computational complexity.
Researcher Affiliation | Academia | Pascale Gourdeau, University of Oxford, pascale.gourdeau@cs.ox.ac.uk; Varun Kanade, University of Oxford, varunk@cs.ox.ac.uk; Marta Kwiatkowska, University of Oxford, marta.kwiatkowska@cs.ox.ac.uk; James Worrell, University of Oxford, james.worrell@cs.ox.ac.uk
Pseudocode | No | The paper contains mathematical definitions, lemmas, theorems, and proofs, but no pseudocode or explicitly labeled algorithm blocks.
Open Source Code | No | The paper does not state that any source code is released and provides no link to a code repository.
Open Datasets | No | The paper reasons about sampling and distributions in its proofs (e.g., a 'polynomial-size sample from the unknown distribution' and the 'uniform distribution on {0, 1}^n'), but it neither uses nor provides access information for any publicly available dataset.
Dataset Splits | No | The paper is theoretical and reports no experiments, so no training, validation, or test dataset splits are required.
Hardware Specification | No | The paper is theoretical and describes no experiments, so no hardware specifications are provided.
Software Dependencies | No | The paper is theoretical and describes no experiments, so no software dependencies or version numbers are listed.
Experiment Setup | No | The paper is theoretical and describes no experiments, so no hyperparameter values or training configurations are reported.