Active Learning for Crowdsourcing Using Knowledge Transfer

Authors: Meng Fang, Jie Yin, Dacheng Tao

AAAI 2014

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on both text and image datasets demonstrate that the proposed method outperforms other state-of-the-art active learning methods. |
| Researcher Affiliation | Collaboration | Centre for Quantum Computation & Intelligent Systems, University of Technology, Sydney, Australia; Computational Informatics, CSIRO, Australia. Meng.Fang@student.uts.edu.au, Jie.Yin@csiro.au, Dacheng.Tao@uts.edu.au |
| Pseudocode | Yes | Algorithm 1: Active Learning with Multiple Labelers |
| Open Source Code | No | The paper does not mention providing access to open-source code for the described methodology. |
| Open Datasets | Yes | Experiments were carried out on a publicly available corpus of scientific texts (Rzhetsky, Shatkay, and Wilbur 2009) and on an image dataset collected via AMT (Welinder et al. 2010). |
| Dataset Splits | Yes | Average accuracies are reported over 10-fold cross-validation. Each run starts with a small labeled set (30% of the training data) before the different active learning algorithms make queries. |
| Hardware Specification | No | The paper does not report the hardware (e.g., CPU or GPU models, memory, or cloud instance types) used to run its experiments. |
| Software Dependencies | No | The paper mentions techniques such as logistic regression, SC, and the L-BFGS quasi-Newton method, but provides no version numbers for any software dependencies or libraries. |
| Experiment Setup | No | The paper uses logistic regression as the base classifier and describes the active learning algorithm steps, but does not report specific experimental setup details such as hyperparameter values (e.g., learning rate, batch size, number of epochs) or optimizer settings. |
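
The paper's Algorithm 1 (Active Learning with Multiple Labelers) is available only as pseudocode, with no released implementation. Below is a minimal Python sketch of the general setup it describes: pool-based uncertainty sampling with a logistic regression base classifier (trained with L-BFGS, which the paper mentions), routing each query to the labeler currently estimated to be most reliable. The synthetic dataset, simulated labeler noise rates, query budget, and the agreement-based reliability update are all illustrative assumptions; the paper's actual knowledge-transfer step for estimating labeler expertise is not reproduced here.

```python
# Hedged sketch of pool-based active learning with multiple noisy labelers.
# This is NOT the authors' Algorithm 1: the labeler-reliability update below
# is a simple stand-in for the paper's knowledge-transfer step, and the
# dataset, noise rates, and query budget are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=600, n_features=20, random_state=0)

# Simulated labelers with different (unknown to the learner) noise rates.
noise_rates = [0.05, 0.20, 0.35]
def query_labeler(j, idx):
    # Labeler j flips the true label with probability noise_rates[j].
    flip = rng.random() < noise_rates[j]
    return 1 - y[idx] if flip else y[idx]

# Seed with a small labeled set (the paper seeds with 30% of training data).
labeled = list(rng.choice(len(X), size=int(0.3 * len(X)), replace=False))
pool = [i for i in range(len(X)) if i not in set(labeled)]
labels = {i: y[i] for i in labeled}          # seed labels assumed clean
est_acc = np.full(len(noise_rates), 0.5)     # running accuracy estimate per labeler
counts = np.zeros(len(noise_rates))

clf = LogisticRegression(solver="lbfgs", max_iter=1000)  # L-BFGS, as in the paper
for _ in range(50):                          # query budget (illustrative)
    clf.fit(X[labeled], [labels[i] for i in labeled])
    # Uncertainty sampling: pick the pool instance closest to the boundary.
    proba = clf.predict_proba(X[pool])[:, 1]
    q = pool.pop(int(np.argmin(np.abs(proba - 0.5))))
    # Route the query to the labeler currently estimated most reliable.
    j = int(np.argmax(est_acc))
    answer = query_labeler(j, q)
    # Crude reliability update: agreement with the current model's prediction.
    agree = answer == clf.predict(X[q].reshape(1, -1))[0]
    counts[j] += 1
    est_acc[j] += (agree - est_acc[j]) / counts[j]
    labels[q] = answer
    labeled.append(q)

print("final estimated labeler accuracies:", np.round(est_acc, 2))
```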
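
The evaluation protocol quoted under Dataset Splits (average accuracy over 10-fold cross-validation, each run seeded with 30% of the training fold as labeled data) can likewise be sketched as follows. The dataset is again synthetic, and the active learning queries are elided so only the split structure is shown.

```python
# Hedged sketch of the reported evaluation protocol: mean accuracy over
# 10-fold cross-validation, seeding each run with 30% of the training fold.
# The dataset and the elided query loop are illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
rng = np.random.default_rng(0)

scores = []
for train_idx, test_idx in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
    # 30% of the training fold forms the initial labeled seed set.
    seed = rng.choice(train_idx, size=int(0.3 * len(train_idx)), replace=False)
    clf = LogisticRegression(solver="lbfgs", max_iter=1000)
    clf.fit(X[seed], y[seed])
    # ... active learning queries would grow the labeled set here ...
    scores.append(clf.score(X[test_idx], y[test_idx]))

print(f"mean accuracy over 10 folds: {np.mean(scores):.3f}")
```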