Tester-Learners for Halfspaces: Universal Algorithms

Authors: Aravind Gollakota, Adam Klivans, Konstantinos Stavropoulos, Arsen Vasilyan

NeurIPS 2023

Reproducibility variables, each listed with its assessed result and the LLM's response:
Research Type: Theoretical. "We give the first tester-learner for halfspaces that succeeds universally over a wide class of structured distributions. Our universal tester-learner runs in fully polynomial time and has the following guarantee: the learner achieves error O(opt) + ε on any labeled distribution that the tester accepts..." The paper consists primarily of definitions, theorems, lemmas, propositions, and their proofs, with no experimental evaluation on datasets.
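For reference, the guarantee quoted above can be stated formally. The notation below (err, opt, the halfspace class H) is a standard paraphrase of the tester-learner definition, not a verbatim excerpt from the paper:

```latex
% Soundness: whenever the tester accepts the labeled distribution D_{XY},
% the hypothesis \hat{w} returned by the learner satisfies
\[
  \Pr_{(x,y)\sim D_{XY}}\big[\operatorname{sign}(\hat{w}\cdot x) \neq y\big]
  \;\le\; O(\mathrm{opt}) + \epsilon,
  \qquad
  \mathrm{opt} \;=\; \min_{h \in \mathcal{H}} \Pr_{(x,y)\sim D_{XY}}\big[h(x) \neq y\big].
\]
% Completeness: if the marginal D_X belongs to the structured family
% (e.g., strongly log-concave distributions), the tester accepts with
% probability at least 1 - \delta.
```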
Researcher Affiliation: Collaboration.
Aravind Gollakota, Apple, aravindg@cs.utexas.edu
Adam R. Klivans, UT Austin, klivans@cs.utexas.edu
Konstantinos Stavropoulos, UT Austin, kstavrop@cs.utexas.edu
Arsen Vasilyan, MIT, vasilyan@mit.edu
Pseudocode: Yes. "The algorithm receives ε > 0, δ > 0 and α ∈ (0, 1/2) ∪ {1} (say α = 1 when we are in the agnostic case) and does the following for some appropriately large universal constants C1, C2 > 0. 1. First, create a set of parameters as follows. ... 8. Otherwise, accept and output the vector w that achieves the smallest empirical error on S2 among the vectors in the list L."
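To make the quoted outline concrete, here is a minimal, hypothetical Python sketch of the skeleton it describes: derive internal parameters, build a candidate list L, run a sanity test on the empirical marginal, and either reject or output the list element with the smallest empirical error on a held-out sample S2. The function names, the perceptron-style candidate generation, and the specific moment test are all illustrative assumptions; the paper's actual subroutines are localization-based and are not reproduced here.

```python
import numpy as np

def tester_learner(S1, S2, eps, delta, alpha=1.0, C1=10.0, C2=10.0):
    """Hypothetical skeleton of a tester-learner for halfspaces.

    S1, S2: labeled sample tuples (X, y), X of shape (n, d), y in {-1, +1}.
    alpha distinguishes the Massart/agnostic regimes in the paper; it is
    unused in this sketch. Returns ("accept", w) or ("reject", None).
    """
    X1, y1 = S1
    X2, y2 = S2
    d = X1.shape[1]

    # Step 1 (illustrative): derive internal parameters from eps/delta/alpha.
    eps_prime = eps / C1
    n_candidates = int(np.ceil(C2 / eps))

    # Steps 2-6 (placeholder): build a list L of candidate halfspaces.
    # A single perceptron pass from random initializations stands in for
    # the paper's localization-based learner.
    rng = np.random.default_rng(0)
    L = []
    for _ in range(n_candidates):
        w = rng.standard_normal(d)
        for x, label in zip(X1, y1):
            if label * (w @ x) <= 0:  # misclassified point: perceptron update
                w = w + label * x
        L.append(w / np.linalg.norm(w))

    # Step 7 (placeholder tester): sanity-check low-order moments of the
    # empirical marginal against an isotropic structured target.
    mean_ok = np.linalg.norm(X1.mean(axis=0)) <= C1 * np.sqrt(d / len(X1))
    cov_eigs = np.linalg.eigvalsh(np.cov(X1, rowvar=False))
    cov_ok = np.all(np.abs(cov_eigs - 1.0) <= 0.5)
    if not (mean_ok and cov_ok):
        return "reject", None

    # Step 8 (as quoted): accept and output the candidate with the
    # smallest empirical error on the held-out sample S2.
    errs = [np.mean(np.sign(X2 @ w) != y2) for w in L]
    return "accept", L[int(np.argmin(errs))]
```

The split into S1 (used for learning and testing) and S2 (used only for final selection) mirrors the quoted step 8, where the output is the list element with the smallest empirical error on S2.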
Open Source Code: No. The paper does not contain any statements about making code publicly available, nor does it provide a link to a code repository.
Open Datasets: No. The paper describes drawing 'samples from DXY' and 'i.i.d. samples from D' for theoretical analysis and algorithm description, but it does not name any publicly available datasets or provide access information for empirical training data.
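Since the paper's data model is purely sampling-based, any reproduction would generate its own i.i.d. draws. A minimal sketch of such a synthetic D_XY is shown below; the Gaussian marginal, halfspace labels, and noise model are assumptions for illustration, as the paper specifies nothing beyond "i.i.d. samples from D".

```python
import numpy as np

def sample_dxy(n, d, noise_rate=0.05, seed=0):
    """Draw n i.i.d. labeled examples from a synthetic D_XY.

    Marginal: standard Gaussian on R^d (one structured marginal covered
    by the paper's universal guarantee). Labels: a fixed halfspace with
    a noise_rate fraction of labels flipped, so opt is about noise_rate.
    """
    rng = np.random.default_rng(seed)
    w_star = rng.standard_normal(d)
    w_star /= np.linalg.norm(w_star)
    X = rng.standard_normal((n, d))
    y = np.sign(X @ w_star)
    y[y == 0] = 1  # break ties on the (measure-zero) boundary
    flip = rng.random(n) < noise_rate  # corrupt a small fraction of labels
    y[flip] *= -1
    return X, y
```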
Dataset Splits: No. The paper is theoretical and does not describe empirical experiments on specific datasets, so it provides no training, validation, or test splits.
Hardware Specification: No. As a theoretical paper focused on algorithm design and proofs, it describes no hardware used for experiments.
Software Dependencies: No. The paper describes no empirical implementation, and consequently lists no software dependencies with version numbers (e.g., Python, PyTorch, or CUDA versions).
Experiment Setup: No. The paper does not describe an empirical experimental setup, and therefore includes no hyperparameter values or system-level training settings.