Scaling Active Search using Linear Similarity Functions

Authors: Sibi Venkatesan, James K. Miller, Jeff Schneider, Artur Dubrawski

IJCAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In our experiments, we show that our method is competitive with existing semi-supervised approaches.
Researcher Affiliation | Academia | Sibi Venkatesan, James K. Miller, Jeff Schneider and Artur Dubrawski, Auton Lab, Robotics Institute, Carnegie Mellon University, Pittsburgh, PA; {sibiv, schneide, awd}@cs.cmu.edu, mille856@andrew.cmu.edu
Pseudocode | Yes | Algorithm 1 (LAS: Linearized Active Search)
Open Source Code | No | The paper does not provide a link or an explicit statement about the availability of source code for the described methodology.
Open Datasets | Yes | We performed experiments on the following datasets: the Cover Type and Adult datasets from the UCI Machine Learning Repository, and MNIST.
Dataset Splits | No | The paper does not provide specific train/validation/test splits. It describes an active-search setup in which points are iteratively queried and moved from the unlabeled set to the labeled set, initialized with one randomly chosen positive.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used to run the experiments.
Software Dependencies | No | The paper does not list specific software dependencies with version numbers.
Experiment Setup | Yes | For LAS, α (the coefficient for the Impact Factor) was taken to be the best value from empirical evaluations: 10^6 for Cover Type and Adult, and 0 for MNIST. π was taken as the true prevalence of positives. See the illustrative sketch below.
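
The Experiment Setup row references two parameters of the LAS method: the Impact Factor coefficient α and the assumed positive prevalence π. Below is a minimal, illustrative Python sketch of a greedy active-search loop built on a linear (dot-product) similarity, showing where parameters like α and π would enter. It is not the authors' Algorithm 1: the scoring rule, the impact surrogate, and all names (linear_similarity, score_unlabeled, active_search, oracle) are assumptions introduced here for illustration only.

```python
# Illustrative sketch of an active-search loop with a linear similarity
# function. This is NOT the paper's Algorithm 1 (LAS); it only mirrors the
# high-level ingredients the report mentions: a linear similarity, a prior
# prevalence pi for unlabeled points, and an impact term weighted by alpha.
import numpy as np

def linear_similarity(X):
    """Linear similarity matrix A = X X^T (the low-rank structure the
    real method exploits to scale linearly in the number of points)."""
    return X @ X.T

def score_unlabeled(A, labels, labeled_idx, unlabeled_idx, pi=0.1, alpha=1e-6):
    """Score unlabeled points by a similarity-weighted label estimate plus
    a hypothetical impact surrogate scaled by alpha."""
    scores = np.empty(len(unlabeled_idx))
    for k, i in enumerate(unlabeled_idx):
        sims_lab = A[i, labeled_idx]
        sims_unl = A[i, unlabeled_idx]
        total = sims_lab.sum() + sims_unl.sum() + 1e-12
        # Labeled neighbours contribute their observed labels;
        # unlabeled neighbours contribute the prior prevalence pi.
        f_i = (sims_lab @ labels[labeled_idx] + pi * sims_unl.sum()) / total
        # Hypothetical surrogate for the impact factor: similarity mass the
        # candidate carries towards the remaining unlabeled pool.
        impact = sims_unl.sum() / (A.diagonal()[i] + 1e-12)
        scores[k] = f_i + alpha * impact
    return scores

def active_search(X, oracle, budget, seed_positive, pi=0.1, alpha=1e-6):
    """Greedy active-search loop: repeatedly query the highest-scoring
    unlabeled point and move it to the labeled set."""
    n = X.shape[0]
    A = linear_similarity(X)
    labels = np.zeros(n)
    labeled = [seed_positive]
    labels[seed_positive] = 1.0
    unlabeled = [i for i in range(n) if i != seed_positive]
    found = 1
    for _ in range(budget):
        scores = score_unlabeled(A, labels, labeled, unlabeled, pi, alpha)
        pick = unlabeled[int(np.argmax(scores))]
        labels[pick] = oracle(pick)  # query the oracle for the true label
        found += int(labels[pick] == 1.0)
        labeled.append(pick)
        unlabeled.remove(pick)
    return found
```

In the paper's actual method, the linear structure A = X X^T is exploited so that scores can be updated without ever forming the full n-by-n similarity matrix; the explicit matrix above is used only to keep this sketch short.
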