Convergence of Uncertainty Sampling for Active Learning

Authors: Anant Raj, Francis Bach

ICML 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we perform an experimental evaluation of our proposed uncertainty sampling based active learning algorithm. The experiments are performed on both synthetic and real-world data.
Researcher Affiliation | Academia | 1 Inria, École Normale Supérieure, PSL Research University, Paris, France; 2 Department of Electrical and Computer Engineering, Coordinated Science Laboratory, University of Illinois at Urbana-Champaign, USA. Correspondence to: Anant Raj <anant.raj@inria.fr>.
Pseudocode | Yes | Algorithm 1: Uncertainty Sampling in Binary Classification; Algorithm 2: Uncertainty Sampling in Multi-Class Classification (a sketch of the binary procedure follows this table).
Open Source Code | No | No explicit statement or link to open-source code for the described methodology is provided in the paper.
Open Datasets | Yes | Normalized binary versions of the datasets are downloaded from manikvarma.org/code/LDKL/download.html.
Dataset Splits | No | The paper does not explicitly provide training/validation/test splits or percentages for any of the datasets.
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments are mentioned.
Software Dependencies | No | The paper mentions using "randomized Fourier features" but does not provide specific version numbers for any software dependencies (a sketch of this construction follows the table).
Experiment Setup | No | The paper specifies parameters such as 'n' and 'µ' for the synthetic data, but does not provide a comprehensive list of hyperparameters, training configurations, or a dedicated experimental setup section with all necessary details.
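
Since the paper provides only pseudocode for its algorithms, the following is a minimal sketch of pool-based uncertainty sampling for binary classification. The logistic-regression model, pool interface, query budget, and seeding are illustrative assumptions rather than the paper's actual configuration, and the initial random sample is assumed to contain both classes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def uncertainty_sampling(X_pool, y_pool, n_init=10, n_queries=50, seed=0):
    """Pool-based uncertainty sampling sketch for binary classification.

    At each round, the unlabeled point whose predicted probability of the
    positive class is closest to 1/2 (i.e., the point the current model is
    least certain about) is queried and added to the labeled set.
    """
    rng = np.random.default_rng(seed)
    labeled = list(rng.choice(len(X_pool), size=n_init, replace=False))
    unlabeled = [i for i in range(len(X_pool)) if i not in labeled]

    model = LogisticRegression()
    for _ in range(n_queries):
        model.fit(X_pool[labeled], y_pool[labeled])
        proba = model.predict_proba(X_pool[unlabeled])[:, 1]
        # Query the most uncertain point: smallest |P(y=1|x) - 1/2|.
        query = unlabeled[int(np.argmin(np.abs(proba - 0.5)))]
        labeled.append(query)
        unlabeled.remove(query)
    return model, labeled
```

For the multi-class setting (Algorithm 2 in the paper), a common analogue replaces |P(y=1|x) - 1/2| with the margin between the two largest class probabilities; whether this matches the paper's exact criterion should be checked against its pseudocode.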
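
The "randomized Fourier features" mentioned under Software Dependencies require no specialized dependency. Below is a hedged sketch of the standard Rahimi-Recht construction for an RBF kernel; the bandwidth `gamma` and feature count `n_features` are illustrative values, not ones reported in the paper.

```python
import numpy as np

def random_fourier_features(X, n_features=500, gamma=1.0, seed=0):
    """Map X (n_samples, d) to features z(X) so that z(x) @ z(y)
    approximates the RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Frequencies are sampled from the kernel's Fourier transform,
    # a Gaussian with standard deviation sqrt(2 * gamma).
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)
```

If a library implementation is preferred, scikit-learn's sklearn.kernel_approximation.RBFSampler provides an equivalent transform.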