Active Regression by Stratification

Authors: Sivan Sabato, Remi Munos

NeurIPS 2014

Reproducibility

Variable | Result | LLM Response
Research Type | Theoretical | We propose a new active learning algorithm for parametric linear regression with random design. We provide finite sample convergence guarantees for general distributions in the misspecified model. This is the first active learner for this setting that provably can improve over passive learning. The new active learner algorithm and its analysis are provided in Section 5, with the main result stated in Theorem 5.1. Theorem 5.1 is proved via a series of lemmas.
Researcher Affiliation | Collaboration | Sivan Sabato, Department of Computer Science, Ben Gurion University, Beer Sheva, Israel (sabatos@cs.bgu.ac.il); Remi Munos, INRIA Lille, France (remi.munos@inria.fr). Current affiliation: Google DeepMind.
Pseudocode | Yes | Algorithm 1 (Active Regression). Input: confidence δ ∈ (0, 1), label budget m, partition A. Output: ŵ ∈ ℝ^d.
Open Source Code | No | The paper does not contain any statement about releasing source code or providing a link to a code repository.
Open Datasets | No | The paper is theoretical and does not describe experiments using publicly available datasets for training or evaluation.
Dataset Splits | No | The paper is theoretical and does not specify any dataset splits (training, validation, test) for empirical data.
Hardware Specification | No | The paper is theoretical and does not provide any details about hardware used for experiments.
Software Dependencies | No | The paper is theoretical and does not specify any software dependencies with version numbers.
Experiment Setup | No | The paper is theoretical and does not include details on an experimental setup, hyperparameters, or system-level training settings.
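To make the table's "Pseudocode" entry concrete, the following is a minimal sketch of the general stratification idea behind Algorithm 1: partition the design space, spend the label budget m across strata, and fit a least-squares estimate ŵ on the queried labels. This is an illustrative reconstruction, not the paper's exact algorithm; in particular, the `label_oracle` interface and the simple proportional budget allocation are assumptions here, whereas the paper uses a data-dependent allocation that yields its convergence guarantees.

```python
import numpy as np

def active_regression_sketch(X_pool, label_oracle, partition_ids, m, seed=None):
    """Illustrative stratified active regression (NOT the paper's exact
    Algorithm 1): split the unlabeled pool by the given partition A,
    spend the label budget m across strata, and fit least squares.

    X_pool:        (n, d) array of unlabeled design points
    label_oracle:  hypothetical interface, i -> noisy label y_i
    partition_ids: (n,) stratum index of each point (the partition A)
    m:             total label budget
    """
    rng = np.random.default_rng(seed)
    strata = np.unique(partition_ids)
    # Proportional budget allocation across strata (an assumption here;
    # the paper's allocation is data-dependent, with provable guarantees).
    sizes = np.array([(partition_ids == a).sum() for a in strata])
    alloc = np.maximum(1, (m * sizes / sizes.sum()).astype(int))
    rows, ys = [], []
    for a, k in zip(strata, alloc):
        idx = np.flatnonzero(partition_ids == a)
        chosen = rng.choice(idx, size=min(k, idx.size), replace=False)
        for i in chosen:
            rows.append(X_pool[i])        # query the label of point i
            ys.append(label_oracle(i))
    # Fit w_hat on the actively labeled subset by ordinary least squares.
    w_hat, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(ys), rcond=None)
    return w_hat
```

With noiseless linear labels and a budget exceeding the dimension, the least-squares step recovers the true parameter; the paper's interest is of course the noisy, misspecified setting, where the stratum-wise allocation determines the convergence rate.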