A Near-optimal Algorithm for Learning Margin Halfspaces with Massart Noise
Authors: Ilias Diakonikolas, Nikos Zarifis
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | Our main result is a computationally efficient learner with sample complexity Θ̃(1/(γ²ϵ²)), nearly matching the known lower bound. In addition, our algorithm is simple and practical, relying on online SGD on a carefully selected sequence of convex losses (an illustrative sketch of this style of learner appears after the table). The paper is theoretical in nature and does not include experiments. |
| Researcher Affiliation | Academia | Ilias Diakonikolas, Department of Computer Sciences, UW-Madison, Madison, WI, ilias@cs.wisc.edu; Nikos Zarifis, Department of Computer Sciences, UW-Madison, Madison, WI, zarifis@wisc.edu |
| Pseudocode | Yes | Algorithm 1: Learning Margin Halfspaces with Massart Noise |
| Open Source Code | No | The paper is theoretical in nature and does not include experiments. The NeurIPS checklist responses within the paper indicate 'NA' for code and data access questions, and no explicit statement or link for code release is found. |
| Open Datasets | No | The paper is theoretical and analyzes algorithms in terms of sample complexity from a distribution D, but it does not specify or use any named public or open datasets for training. |
| Dataset Splits | No | The paper is theoretical and does not conduct experiments, so no training, validation, or test dataset splits are described. |
| Hardware Specification | No | The paper is theoretical in nature and does not include experiments, so no hardware specifications are provided. |
| Software Dependencies | No | The paper is theoretical and does not conduct experiments, so no software dependencies with version numbers are mentioned. |
| Experiment Setup | No | The paper is theoretical and does not describe an experimental setup with specific hyperparameters or training configurations. |
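
The abstract's description of the learner (online SGD over convex surrogate losses for a γ-margin halfspace under Massart noise) can be illustrated with a minimal sketch. The sketch below is not the paper's Algorithm 1: the function names `online_sgd_halfspace` and `massart_stream`, the fixed hinge-style surrogate, the 1/√t step size, the unit-ball projection, and the constant flip rate are all illustrative assumptions, whereas the paper relies on a carefully selected sequence of convex losses.

```python
import numpy as np


def online_sgd_halfspace(stream, dim, gamma, lr=0.5):
    """Online SGD on a hinge-style convex surrogate for a gamma-margin halfspace.

    NOTE: illustrative sketch only, not the paper's Algorithm 1. The single
    fixed surrogate loss and step-size schedule here are assumptions.
    """
    w = np.zeros(dim)
    for t, (x, y) in enumerate(stream, start=1):
        # Subgradient step on the surrogate loss max(0, gamma - y * <w, x>).
        if y * np.dot(w, x) < gamma:
            w = w + (lr / np.sqrt(t)) * y * x
        # Project back onto the unit ball so the gamma-margin scale is preserved.
        norm = np.linalg.norm(w)
        if norm > 1.0:
            w = w / norm
    return w


def massart_stream(n, dim, gamma, eta, rng):
    """Synthetic stream of gamma-margin halfspace examples with noisy labels.

    Labels are flipped with probability eta (a constant rate, which is a
    special case of Massart noise where the flip probability is at most eta).
    """
    w_star = rng.standard_normal(dim)
    w_star /= np.linalg.norm(w_star)
    for _ in range(n):
        x = rng.standard_normal(dim)
        x /= np.linalg.norm(x)
        if abs(np.dot(w_star, x)) < gamma:  # enforce the margin condition
            continue
        y = np.sign(np.dot(w_star, x))
        if rng.random() < eta:              # noisy label flip
            y = -y
        yield x, y


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gamma = 0.05
    w_hat = online_sgd_halfspace(massart_stream(20000, 20, gamma, 0.1, rng), 20, gamma)
    print("learned weight norm:", np.linalg.norm(w_hat))
```

The projection step keeps the hypothesis in the unit ball so that the γ-margin threshold in the surrogate remains meaningful across iterations; any resemblance to the paper's actual loss schedule is not implied.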