The Complexity of Adversarially Robust Proper Learning of Halfspaces with Agnostic Noise

Authors: Ilias Diakonikolas, Daniel M. Kane, Pasin Manurangsi

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We study the computational complexity of adversarially robust proper learning of halfspaces in the distribution-independent agnostic PAC model, with a focus on Lp perturbations. We give a computationally efficient learning algorithm and a nearly matching computational hardness result for this problem. An interesting implication of our findings is that the L∞ perturbations case is provably computationally harder than the case 2 ≤ p < ∞.
Researcher Affiliation | Collaboration | Ilias Diakonikolas (University of Wisconsin, Madison; ilias@cs.wisc.edu), Daniel M. Kane (University of California, San Diego; dakane@cs.ucsd.edu), Pasin Manurangsi (Google Research, Mountain View; pasin@google.com)
Pseudocode | Yes | Figure 1: Hardness Reduction from Label Cover to L∞-margin Halfspace Learning. Here e_i denotes the i-th standard basis vector (i.e., the vector with value one in the i-th coordinate and zero in the remaining coordinates). This notation extends to a set S of coordinates: e_S denotes the indicator vector of S, i.e., e_S = Σ_{i∈S} e_i.
Open Source Code | No | The paper does not provide any explicit statements about releasing source code or links to a code repository for the methodology described.
Open Datasets | No | The paper is theoretical and focuses on computational complexity and algorithms. It does not conduct empirical experiments with real-world datasets, nor does it specify any publicly available datasets for training purposes. It refers only to theoretical 'distributions' and 'multisets of samples'.
Dataset Splits | No | The paper is theoretical and does not conduct empirical experiments that would involve training, validation, or test dataset splits. Therefore, no dataset split information is provided.
Hardware Specification | No | The paper is theoretical and focuses on algorithm design and computational complexity analysis. It does not conduct empirical experiments, and therefore no hardware specifications are mentioned.
Software Dependencies | No | The paper is theoretical and discusses algorithms and complexity bounds. It does not mention any specific software or library dependencies with version numbers, as no empirical experiments are described that would require them.
Experiment Setup | No | The paper is theoretical and presents algorithms and complexity analysis rather than empirical experiments. No details regarding experimental setup, such as hyperparameters or training configurations, are provided.
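The notion of adversarially robust learning of halfspaces under Lp perturbations, discussed in the abstract above, admits a concise closed form: a halfspace sign(⟨w, x⟩ − θ) classifies (x, y) correctly against every perturbation of Lp-radius r if and only if y(⟨w, x⟩ − θ) ≥ r·‖w‖_q, where q is the dual exponent (1/p + 1/q = 1). The sketch below (not from the paper; function names are illustrative) computes the robust 0-1 error of a given halfspace under this standard dual-norm characterization:

```python
import numpy as np

def dual_norm(w, p):
    """||w||_q for the dual exponent q, where 1/p + 1/q = 1.
    p = inf pairs with q = 1; p = 1 pairs with q = inf."""
    if np.isinf(p):
        return np.linalg.norm(w, 1)
    if p == 1:
        return np.linalg.norm(w, np.inf)
    q = p / (p - 1)
    return np.linalg.norm(w, q)

def robust_error(w, theta, X, y, r, p):
    """Fraction of samples (rows of X, labels y in {-1,+1}) that the
    halfspace sign(<w, x> - theta) misclassifies under some ell_p
    perturbation of radius r. A sample survives all perturbations
    iff its signed margin is at least r * ||w||_q."""
    margins = y * (X @ w - theta)
    return float(np.mean(margins < r * dual_norm(w, p)))
```

For example, with w = (1, 0) and θ = 0, the points (2, 0) with label +1 and (−2, 0) with label −1 both have margin 2, so they remain correctly classified under L2 perturbations of radius 1 but not of radius 3.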
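The indicator-vector notation used in the Figure 1 caption (e_i for a standard basis vector, e_S = Σ_{i∈S} e_i for a set S of coordinates) can be made concrete with a minimal sketch; the helper names here are illustrative, not from the paper:

```python
import numpy as np

def e(i, n):
    """The i-th standard basis vector in R^n (0-indexed in Python):
    one in coordinate i, zero elsewhere."""
    v = np.zeros(n)
    v[i] = 1.0
    return v

def e_S(S, n):
    """Indicator vector of a coordinate set S, i.e. the sum of e_i
    over all i in S."""
    return sum((e(i, n) for i in S), np.zeros(n))
```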