Forster Decomposition and Learning Halfspaces with Noise

Authors: Ilias Diakonikolas, Daniel Kane, Christos Tzamos

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | As the main application of this result, we obtain the first polynomial-time algorithm for distribution-independent PAC learning of halfspaces in the Massart noise model with strongly polynomial sample complexity, i.e., independent of the bit complexity of the examples. Previous algorithms for this learning problem incurred sample complexity scaling polynomially with the bit complexity, even though such a dependence is not information-theoretically necessary.
Researcher Affiliation | Academia | Ilias Diakonikolas, University of Wisconsin-Madison, ilias@cs.wisc.edu; Daniel M. Kane, University of California, San Diego, dakane@cs.ucsd.edu; Christos Tzamos, University of Wisconsin-Madison, tzamos@wisc.edu
Pseudocode | Yes | Algorithm 1: Learning Algorithm for Massart Halfspaces
Open Source Code | No | The paper does not provide any statements or links indicating the availability of open-source code for the described methodology.
Open Datasets | No | The paper is theoretical and focuses on algorithmic development and analysis. It discusses conceptual samples drawn from a distribution D but conducts no empirical studies on a publicly available dataset, so no concrete dataset access information is provided.
Dataset Splits | No | The paper is theoretical and does not involve empirical experiments with datasets requiring training, validation, or test splits; it discusses samples only for theoretical analysis.
Hardware Specification | No | The paper is theoretical, describing algorithms and their properties (sample complexity, runtime). It does not mention any specific hardware, as no empirical experiments are conducted.
Software Dependencies | No | The paper is theoretical and does not mention any specific software (with version numbers), as no empirical experiments are conducted.
Experiment Setup | No | The paper does not provide experimental setup details such as hyperparameters or system-level training settings, as it does not conduct empirical experiments.
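As background for the title concept: a Forster transform is a linear map that brings a point set, after normalization to the unit sphere, into radial isotropic position (second-moment matrix proportional to the identity). The sketch below is an illustrative classical fixed-point iteration for computing such a transform on points in general position; it is not the paper's Algorithm 1, and the function name and iteration count are our own choices.

```python
import numpy as np

def forster_transform(X, iters=100):
    """Illustrative sketch: iterate toward a linear map A such that the
    points A @ x_i, normalized to the unit sphere, have second-moment
    matrix I/d (radial isotropic position). Assumes general position."""
    n, d = X.shape
    for _ in range(iters):
        Y = X / np.linalg.norm(X, axis=1, keepdims=True)  # project to sphere
        M = Y.T @ Y / n                # second moment of normalized points
        w, V = np.linalg.eigh(M)       # M is symmetric PSD
        X = X @ (V @ np.diag(w ** -0.5) @ V.T)  # whiten: x <- M^{-1/2} x
        X /= np.linalg.norm(X)         # rescale to avoid overflow; does not
                                       # affect the normalized points
    return X

# Usage: after the iteration, the normalized points are nearly isotropic.
rng = np.random.default_rng(0)
X = forster_transform(rng.standard_normal((50, 3)))
Y = X / np.linalg.norm(X, axis=1, keepdims=True)
# Y.T @ Y / 50 is now close to np.eye(3) / 3
```

At the fixed point the whitening step becomes a uniform rescaling, which is exactly the condition that the normalized points are already in radial isotropic position.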