Reliable Learning of Halfspaces under Gaussian Marginals

Authors: Ilias Diakonikolas, Lisheng Ren, Nikos Zarifis

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | Our main positive result is a new algorithm for reliable learning of Gaussian halfspaces on $\mathbb{R}^d$ with sample and computational complexity $d^{O(\log(\min\{1/\alpha,\,1/\epsilon\}))} \min\big(2^{\log(1/\epsilon)^{O(\log(1/\alpha))}},\, 2^{\mathrm{poly}(1/\epsilon)}\big)$, where $\epsilon$ is the excess error and $\alpha$ is the bias of the optimal halfspace. We complement our upper bound with a Statistical Query lower bound suggesting that the $d^{\Omega(\log(1/\alpha))}$ dependence is best possible. (A worked instantiation of this bound is given after the table.)
Researcher Affiliation | Academia | Ilias Diakonikolas, University of Wisconsin-Madison (ilias@cs.wisc.edu); Lisheng Ren, University of Wisconsin-Madison (lren29@wisc.edu); Nikos Zarifis, University of Wisconsin-Madison (zarifis@cs.wisc.edu)
Pseudocode | Yes | Algorithm 1: Reliably Learning General Halfspaces with Gaussian Marginals. Algorithm 2: Finding a Direction with High Correlation. Algorithm 3: Reliably Learning General Halfspaces with Gaussian Marginals (detailed version of Algorithm 1). (An illustrative sketch of the reliable-learning objective is given after the table.)
Open Source Code | No | The paper is theoretical and does not mention providing open-source code for the methodology.
Open Datasets | No | The paper is theoretical and does not use, or provide access information for, a publicly available training dataset; it refers only to a theoretical distribution $D$ supported on $\mathbb{R}^d \times \{\pm 1\}$.
Dataset Splits | No | The paper is theoretical and does not conduct experiments or specify training/validation/test dataset splits.
Hardware Specification | No | The paper is theoretical and does not describe any experiments that would require hardware specifications.
Software Dependencies | No | The paper is theoretical and does not describe any experiments that would require software dependencies with specific version numbers.
Experiment Setup | No | The paper is theoretical and does not describe any experimental setup or hyperparameters.
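
As a worked instantiation of the complexity bound quoted in the Research Type row (a reading of the stated formula, not an additional claim taken from the paper): in the constant-bias regime $\alpha = \Theta(1)$, the bound collapses to polynomial in $d$ and quasi-polynomial in $1/\epsilon$.

```latex
% Instantiating the stated bound at constant bias \alpha = \Theta(1)
% (assumption: \epsilon small enough that 1/\epsilon >= 1/\alpha):
%   \min\{1/\alpha, 1/\epsilon\} = O(1)  =>  the first factor is d^{O(1)}
%   \log(1/\epsilon)^{O(\log(1/\alpha))} = \mathrm{polylog}(1/\epsilon)
\[
  d^{O(\log(\min\{1/\alpha,\,1/\epsilon\}))}
  \cdot \min\!\Big( 2^{\log(1/\epsilon)^{O(\log(1/\alpha))}},\;
                    2^{\mathrm{poly}(1/\epsilon)} \Big)
  \;=\; d^{O(1)} \cdot 2^{\mathrm{polylog}(1/\epsilon)} .
\]
```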
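The Pseudocode row lists the paper's Algorithms 1-3 but does not reproduce them. Below is a minimal, hypothetical Python sketch of the reliable-learning objective those algorithms target: among halfspaces with zero empirical false positives, prefer the one with the fewest false negatives. The random direction search and all identifiers (w_star, b_star, etc.) are illustrative assumptions; this is not the paper's algorithm and carries none of its guarantees.

```python
import numpy as np

# Hypothetical toy sketch (NOT the paper's Algorithms 1-3): reliable learning
# asks for a hypothesis with zero false positives on the sample and, among
# those, as few false negatives as possible. We search random directions; for
# each, the smallest threshold with zero empirical false positives sits just
# above the largest margin of any negatively labeled point.
rng = np.random.default_rng(0)
d, n = 5, 2000

# Synthetic data: standard Gaussian marginal labeled by a hidden biased
# halfspace sign(<w_star, x> - b_star); names here are illustrative.
w_star = rng.standard_normal(d)
w_star /= np.linalg.norm(w_star)
b_star = 0.8
X = rng.standard_normal((n, d))
y = np.where(X @ w_star >= b_star, 1, -1)

best_fn, best = np.inf, None
for _ in range(500):
    w = rng.standard_normal(d)
    w /= np.linalg.norm(w)
    m = X @ w
    b = m[y == -1].max() + 1e-9           # zero false positives on the sample
    fn = int(np.sum((y == 1) & (m < b)))  # false negatives at this threshold
    if fn < best_fn:
        best_fn, best = fn, (w, b)

w_hat, b_hat = best
print(f"best zero-false-positive halfspace misses {best_fn} of "
      f"{int(np.sum(y == 1))} positives (threshold {b_hat:.3f})")
```

A faithful implementation would replace the random search with the paper's Algorithm 2, which finds a direction with high correlation to the normal vector of the optimal halfspace; that direction-finding step is what underlies the stated complexity bound.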