Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Minimax Analysis of Active Learning

Authors: Steve Hanneke, Liu Yang

JMLR 2015

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | This work establishes distribution-free upper and lower bounds on the minimax label complexity of active learning with general hypothesis classes, under various noise models. The results reveal a number of surprising facts. In particular, under the noise model of Tsybakov (2004), the minimax label complexity of active learning with a VC class is always asymptotically smaller than that of passive learning, and is typically significantly smaller than the best previously-published upper bounds in the active learning literature. ... We also propose new active learning strategies that nearly achieve these minimax label complexities.
Researcher Affiliation | Collaboration | Steve Hanneke EMAIL, Princeton, NJ 08542; Liu Yang EMAIL, IBM T. J. Watson Research Center, Yorktown Heights, NY 10598
Pseudocode | Yes | Algorithm 1 Input: label budget n Output: classifier ĥn ... Subroutine 1 Input: label budget n, data point index m Output: query counter q, value y
Open Source Code | No | The paper does not contain any explicit statements about releasing source code or provide links to a code repository, and it does not mention code in supplementary materials.
Open Datasets | No | The paper is theoretical, focusing on mathematical bounds and active learning strategies. It defines abstract concepts such as an "instance space X" and a "label space Y" and does not mention using any specific real-world or synthetic datasets for empirical evaluation.
Dataset Splits | No | Since the paper does not use any datasets for empirical evaluation, there is no discussion of dataset splits.
Hardware Specification | No | The paper is purely theoretical and does not describe any experiments that would require hardware specifications.
Software Dependencies | No | The paper focuses on theoretical analysis and algorithms rather than implementation details; it does not mention any specific software packages or versions.
Experiment Setup | No | The paper is theoretical and does not describe any experimental setup, hyperparameters, or training configurations.
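The Pseudocode row above records Algorithm 1's interface: a label budget n in, a classifier ĥn out. To illustrate why active learning can use far fewer labels than passive learning (the gap the paper's minimax bounds quantify), here is a minimal sketch in Python. It is not the paper's Algorithm 1; it learns a 1-D threshold classifier (a VC class with d = 1, realizable case) by binary search, and all names (`active_learn_threshold`, `oracle`, etc.) are our own illustrative choices.

```python
# Illustrative sketch (NOT the paper's Algorithm 1): active learning of a
# 1-D threshold classifier via binary search. In the realizable case this
# needs O(log m) label queries on m points, versus the Theta(m) labels a
# passive learner consumes reading every label.

def active_learn_threshold(xs, oracle, budget):
    """Binary-search for the decision threshold on sorted points xs.

    xs     : sorted list of unlabeled instances
    oracle : callable x -> label in {0, 1}; labels are 0 below the true
             threshold and 1 at or above it (realizable / noiseless case)
    budget : maximum number of label queries (the "label budget n")

    Returns (threshold, queries_used).
    """
    lo, hi = 0, len(xs)           # invariant: threshold index lies in [lo, hi]
    queries = 0
    while lo < hi and queries < budget:
        mid = (lo + hi) // 2
        queries += 1
        if oracle(xs[mid]) == 1:  # xs[mid] is at or above the threshold
            hi = mid
        else:
            lo = mid + 1
    threshold = xs[lo] if lo < len(xs) else float("inf")
    return threshold, queries

# Usage: 1000 points, true threshold at x = 617.
xs = list(range(1000))
truth = lambda x: 1 if x >= 617 else 0
t, q = active_learn_threshold(xs, truth, budget=20)
# → (617, 10): the threshold is recovered with only 10 label queries.
```

The binary-search invariant (threshold index always inside [lo, hi]) is what drives the logarithmic label complexity; under noise, as in the Tsybakov model the paper analyzes, a single query per midpoint no longer suffices and the budget accounting becomes the heart of the analysis.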