Active Learning from Imperfect Labelers
Authors: Songbai Yan, Kamalika Chaudhuri, Tara Javidi
NeurIPS 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We propose an algorithm which utilizes abstention responses, and analyze its statistical consistency and query complexity under fairly natural assumptions on the noise and abstention rate of the labeler. This algorithm is adaptive in the sense that it can automatically request fewer queries from a more informed or less noisy labeler. We couple our algorithm with lower bounds to show that under some technical conditions, it achieves nearly optimal query complexity. |
| Researcher Affiliation | Academia | Songbai Yan, University of California, San Diego (yansongbai@eng.ucsd.edu); Kamalika Chaudhuri, University of California, San Diego (kamalika@cs.ucsd.edu); Tara Javidi, University of California, San Diego (tjavidi@eng.ucsd.edu) |
| Pseudocode | Yes | Algorithm 1: the active learning algorithm for learning thresholds; Algorithm 3: the active learning algorithm for the smooth boundary fragment class; Procedure 2: adaptive sequential testing. (An illustrative sketch of threshold learning with abstentions follows this table.) |
| Open Source Code | No | The paper does not provide any statement about releasing source code or links to a code repository for the described methodology. |
| Open Datasets | No | The paper is theoretical and does not conduct experiments on specific datasets. Therefore, it does not provide access information for a publicly available or open dataset for training. |
| Dataset Splits | No | The paper is theoretical and does not report on empirical experiments with datasets, thus no dataset split information (training/validation/test) is provided. |
| Hardware Specification | No | The paper is theoretical and does not describe any empirical experiments, therefore no hardware specifications are mentioned. |
| Software Dependencies | No | The paper is theoretical and does not describe any empirical experiments, therefore no software dependencies with version numbers are mentioned. |
| Experiment Setup | No | The paper is theoretical and does not describe any empirical experiments with specific hyperparameter values or training configurations. |
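
The paper itself gives only pseudocode, so the following is a minimal, hypothetical Python sketch of the core idea behind learning a 1-D threshold from an imperfect labeler: bisect the interval, repeat-query each point to average out label noise, and treat abstention responses (which the paper assumes become more likely near the decision boundary) as informative. The simulated labeler, all function names, and the specific constants are assumptions made for this example; this is not the paper's Algorithm 1 or Procedure 2.

```python
import random

def noisy_threshold_labeler(x, threshold=0.6, noise=0.1, abstain=0.3):
    """Simulated imperfect labeler (an assumption, not from the paper):
    abstains more often near the threshold, otherwise flips the true
    label with probability `noise`."""
    if random.random() < abstain * max(0.0, 1.0 - abs(x - threshold)):
        return None                       # abstention response
    label = 1 if x >= threshold else 0
    return label if random.random() > noise else 1 - label

def majority_label(x, labeler, repeats=15):
    """Query a point repeatedly and majority-vote over non-abstentions."""
    votes = [labeler(x) for _ in range(repeats)]
    votes = [v for v in votes if v is not None]
    if not votes:
        return None                       # only abstentions observed
    return 1 if sum(votes) * 2 >= len(votes) else 0

def learn_threshold(labeler, epsilon=0.01):
    """Bisect [0, 1]; treat abstention-only responses as a signal that the
    queried point is already close to the boundary."""
    lo, hi = 0.0, 1.0
    while hi - lo > epsilon:
        mid = (lo + hi) / 2.0
        label = majority_label(mid, labeler)
        if label is None:
            # Abstentions cluster near the boundary, so shrink around `mid`.
            quarter = (hi - lo) / 4.0
            lo, hi = max(lo, mid - quarter), min(hi, mid + quarter)
        elif label == 1:
            hi = mid                      # threshold lies to the left of `mid`
        else:
            lo = mid                      # threshold lies to the right of `mid`
    return (lo + hi) / 2.0

if __name__ == "__main__":
    random.seed(0)
    estimate = learn_threshold(noisy_threshold_labeler)
    print(f"estimated threshold: {estimate:.3f} (true value 0.6)")
```

The fixed repeat-and-vote step here loosely plays the role that the paper's adaptive sequential test (Procedure 2) plays in handling label noise, but with a constant repetition budget chosen purely for simplicity of illustration.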