Online Active Learning of Reject Option Classifiers

Authors: Kulin Shah, Naresh Manwani (pp. 5652-5659)

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We provide extensive experimental results to show the effectiveness of the proposed algorithms. The proposed algorithms efficiently reduce the number of labeled examples required.
Researcher Affiliation | Academia | Kulin Shah, Naresh Manwani; Machine Learning Lab, KCIS, IIIT Hyderabad, India; kulin.shah@students.iiit.ac.in, naresh.manwani@iiit.ac.in
Pseudocode | Yes | Algorithm 1 Double Ramp Loss Active Learning (DRAL)... Algorithm 2 Double Sigmoid Loss Active Learning (DSAL)
Open Source Code | No | The paper does not provide any explicit statement about releasing source code or a link to a code repository.
Open Datasets | Yes | We show the effectiveness of the proposed active learning approaches on the Gisette, Phishing and Guide datasets available in the UCI ML repository (Lichman 2013).
Dataset Splits | No | The paper does not explicitly state training, validation, and test dataset splits with percentages or sample counts.
Hardware Specification | No | The paper does not provide specific hardware details such as the GPU models, CPU types, or memory used to run the experiments.
Software Dependencies | No | The paper does not specify software dependencies with version numbers (e.g., programming languages, libraries, or frameworks).
Experiment Setup | Yes | In all our simulations, we initialize the step size to a small value, and after every trial the step size decreases by a small constant. The parameter α in the double sigmoid loss function is chosen to minimize the average risk and the average fraction of queried labels (averaged over 100 runs).
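To make the Experiment Setup and Pseudocode rows concrete, below is a minimal sketch of an online active learning loop with a reject option, written under stated assumptions rather than as the paper's Algorithm 2 (DSAL). Only the step-size schedule (initialize small, decrease by a small constant each trial) and the role of the steepness parameter α come from the quoted setup; the linear scorer, the rejection cost d, the band half-width rho, the sigmoid-shaped query probability, the specific surrogate combining two shifted sigmoids, and the function names are illustrative assumptions.

```python
# Minimal sketch, NOT the paper's DSAL (Algorithm 2): an online loop with a
# linear scorer, a reject band of half-width rho, an assumed sigmoid-shaped
# query rule, and the step-size schedule described above. The surrogate below
# combines two sigmoids shifted by +/- rho only to illustrate a
# "double sigmoid"-style loss; the paper's exact loss, gradients, and query
# probability differ and should be taken from the paper itself.
import numpy as np


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def dsigmoid(z):
    s = sigmoid(z)
    return s * (1.0 - s)


def double_sigmoid_surrogate(margin, rho, d, alpha):
    # margin = y * (w @ x); d = assumed rejection cost (0 < d < 0.5);
    # alpha = steepness parameter (the α tuned in the paper's experiments).
    # Used here only to document the surrogate whose gradient drives the update.
    return (2.0 * d * sigmoid(-alpha * (margin - rho))
            + 2.0 * (1.0 - d) * sigmoid(-alpha * (margin + rho)))


def online_active_learning(stream, dim, rho=1.0, d=0.2, alpha=2.0,
                           eta0=0.1, eta_decay=1e-4, seed=0):
    """stream yields (x, y) with x an ndarray of shape (dim,) and y in {-1, +1}."""
    rng = np.random.default_rng(seed)
    w = np.zeros(dim)
    eta = eta0
    queried = 0
    for x, y in stream:
        score = w @ x
        # Assumed query rule: request the label with probability that is high
        # near the reject band, where the current classifier is uncertain.
        if rng.random() < sigmoid(-alpha * (abs(score) - rho)):
            queried += 1
            margin = y * score
            # Gradient of the illustrative surrogate w.r.t. w, via the chain
            # rule through margin = y * (w @ x).
            g_margin = -alpha * (
                2.0 * d * dsigmoid(-alpha * (margin - rho))
                + 2.0 * (1.0 - d) * dsigmoid(-alpha * (margin + rho)))
            w -= eta * g_margin * y * x
        # Step-size schedule from the Experiment Setup row: start small and
        # decrease by a small constant after every trial (floored for safety).
        eta = max(eta - eta_decay, 1e-5)
    return w, queried


def predict_with_reject(w, x, rho):
    # Reject (return 0) when the score falls inside the band [-rho, +rho].
    score = w @ x
    return 0 if abs(score) <= rho else int(np.sign(score))
```

As a usage note under the same assumptions, dividing `queried` by the number of examples seen gives the fraction of queried labels, the quantity the Experiment Setup row says is averaged over 100 runs alongside the average risk when tuning α.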