Online Active Linear Regression via Thresholding

Authors: Carlos Riquelme, Ramesh Johari, Baosen Zhang

AAAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Simulations suggest the algorithm is remarkably robust: it provides significant benefits over passive random sampling in real-world datasets that exhibit high nonlinearity and high dimensionality, significantly reducing both the mean and variance of the squared error.
Researcher Affiliation | Academia | Carlos Riquelme and Ramesh Johari, Stanford University, {rikel, rjohari}@stanford.edu. Baosen Zhang, University of Washington, zhangbao@uw.edu.
Pseudocode | Yes | Algorithm 1: Thresholding Algorithm; Algorithm 1b: Adaptive Thresholding Algorithm; Algorithm 2: Sparse Thresholding Algorithm. (A hedged sketch of the thresholding selection rule follows the table.)
Open Source Code | No | The paper does not provide an explicit statement or link to open-source code for the methodology described.
Open Datasets | Yes | We show the results of Algorithm 1b (online Σ estimation) with the simplest distributional assumption (Gaussian threshold, ξj = 1) versus random sampling on publicly available real-world datasets (UCI, Lichman 2013), measuring test squared prediction error. (A sketch of this evaluation protocol follows the table.)
Dataset Splits | No | In each one, we randomly split the dataset into training (n observations, random order) and test (the rest). The paper mentions training and testing splits but does not explicitly specify a validation split or its size/methodology.
Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper describes the use of the Lasso and OLS estimators but does not provide specific software names with version numbers for its dependencies.
Experiment Setup | No | The paper mentions algorithmic parameters such as k1 = (2/3)k for support recovery and the constants C, λ, and Γ within the algorithms, but it lacks specific experimental setup details such as learning rates, batch sizes, optimizer types, or other common hyperparameters typically needed for reproducibility in machine learning experiments.
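The Pseudocode row above lists a Thresholding Algorithm whose selection rule, as the quoted passages describe it, is to spend a label only on covariates that look informative: an incoming point is kept when a norm statistic exceeds a threshold Γ calibrated so that roughly k of the n streaming points pass. The snippet below is a minimal sketch of one reading of that rule for the Gaussian-threshold case (ξj = 1) mentioned in the Open Datasets row; the function name `thresholding_select` and the chi-squared quantile used for Γ are illustrative assumptions, not the authors' exact pseudocode.

```python
import numpy as np
from scipy.stats import chi2

def thresholding_select(X_stream, n, k):
    """Pick up to k of n streaming covariates whose squared norm clears
    a threshold Gamma; returns the indices of the selected points."""
    d = X_stream.shape[1]
    # Under whitened Gaussian covariates ||x||^2 ~ chi^2_d, so the
    # (1 - k/n) quantile lets roughly k of the n points through.
    gamma = chi2.ppf(1.0 - k / n, df=d)
    selected = []
    for t, x in enumerate(X_stream):
        if len(selected) >= k:
            break                      # label budget exhausted
        if x @ x >= gamma:             # informative point: query its label
            selected.append(t)
    return selected
```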
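The Open Datasets and Dataset Splits rows describe the evaluation protocol: randomly split each dataset into training and test portions, choose k labeled training points either actively (thresholding) or passively (uniformly at random), fit OLS on the labeled points, and report squared prediction error on the held-out test set. The harness below is a hedged reconstruction of that protocol; the function names `test_error` and `random_select` and the use of numpy's least-squares solver for the OLS step are assumptions, not code from the paper.

```python
import numpy as np

def random_select(X_stream, n, k, seed=0):
    """Passive baseline: label k uniformly random points."""
    return np.random.default_rng(seed).choice(n, size=k, replace=False)

def test_error(X, y, n_train, k, select_fn, seed=0):
    """Random train/test split, select k training points with `select_fn`,
    fit OLS on them, and return mean squared error on the test set."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(len(y))
    train, test = perm[:n_train], perm[n_train:]
    idx = np.asarray(select_fn(X[train], n_train, k), dtype=int)
    chosen = train[idx]
    beta, *_ = np.linalg.lstsq(X[chosen], y[chosen], rcond=None)  # OLS fit
    return np.mean((X[test] @ beta - y[test]) ** 2)

# Example: compare active vs. passive sampling on one split.
# err_active  = test_error(X, y, n_train=2000, k=200, select_fn=thresholding_select)
# err_passive = test_error(X, y, n_train=2000, k=200, select_fn=random_select)
```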
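The Software Dependencies and Experiment Setup rows refer to the Lasso and OLS estimators and to a recovery budget k1 = (2/3)k used for support recovery in the Sparse Thresholding Algorithm. The sketch below illustrates that two-stage idea under our reading: use the first k1 labeled points for a Lasso fit that estimates the support, then fit OLS restricted to the recovered coordinates on the remaining labeled points. The use of scikit-learn's Lasso, the regularization strength alpha, and the support tolerance are illustrative assumptions, and the second-stage thresholding of the paper's Algorithm 2 is not reproduced here.

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_two_stage(X_lab, y_lab, k, alpha=0.1):
    """Two-stage sketch for the sparse setting: Lasso on the first
    k1 = (2/3)k labeled points to recover the support, then OLS on the
    remaining labeled points restricted to that support."""
    d = X_lab.shape[1]
    k1 = int((2 / 3) * k)
    lasso = Lasso(alpha=alpha).fit(X_lab[:k1], y_lab[:k1])
    support = np.flatnonzero(np.abs(lasso.coef_) > 1e-8)   # recovered coordinates
    if support.size == 0:
        support = np.arange(d)                             # fallback: keep everything
    beta_s, *_ = np.linalg.lstsq(X_lab[k1:k][:, support], y_lab[k1:k], rcond=None)
    beta = np.zeros(d)
    beta[support] = beta_s
    return beta
```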