Active Learning with Cross-Class Knowledge Transfer

Authors: Yuchen Guo, Guiguang Ding, Yuqi Wang, Xiaoming Jin

AAAI 2016

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "We conduct experiments on three benchmark datasets and the results demonstrate the efficacy of the proposed method." "We carry out extensive experiments on three benchmark datasets. The experimental results demonstrate that the proposed method can significantly reduce the labeling efforts in comparison to traditional active learning methods." |
| Researcher Affiliation | Academia | "Yuchen Guo, Guiguang Ding, Yuqi Wang, and Xiaoming Jin. School of Software, Tsinghua University, Beijing 100084, China. {yuchen.w.guo,wangyuqi10}@gmail.com, {dinggg,xmjin}@tsinghua.edu.cn" |
| Pseudocode | Yes | "Algorithm 1: Cross-class Transfer Active Learning" |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | "The first dataset is Animals with Attributes (AwA) (Lampert, Nickisch, and Harmeling 2014). [...] The second dataset is aPascal-aYahoo (aPY) (Farhadi et al. 2009). [...] The third dataset is the SUN fine-grained scene recognition dataset (Patterson and Hays 2012)." |
| Dataset Splits | Yes | "Following the settings in active learning (Chattopadhyay et al. 2013), we equally split the target domain samples into two parts. We use one part to train classifiers with active learning, i.e., D_t^tr, and the other part is the unseen test set, i.e., D_t^te." Also: "In this paper we propose to perform k-fold cross-validation as follows. We split the source domain classes equally into k parts. In each fold, we use 1 part as the target domain and the other k-1 parts as the source domain. [...] Specifically, we set k = 4, 4, and 10 for AwA, aPY, and SUN respectively, and the values of α and β are chosen from {0.01, 0.1, 1, 10, 100}." |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU or GPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper mentions "Liblinear SVM" and "DeCAF" but does not provide version numbers for these or any other software dependencies. |
| Experiment Setup | Yes | "For all active learning methods, we select 10, 12, and 10 unlabeled samples for human labeling for the AwA, aPY, and SUN datasets respectively in each iteration, i.e., one sample per class on average." "The performance is evaluated by the classification accuracy on the unseen test data D_t^te after retraining in each iteration." "For our method, we need to determine the hyperparameters α and β. In this paper we propose to perform k-fold cross-validation as follows. [...] Specifically, we set k = 4, 4, and 10 for AwA, aPY, and SUN respectively, and the values of α and β are chosen from {0.01, 0.1, 1, 10, 100}." |
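The class-wise cross-validation protocol quoted above (split the source classes into k parts, hold out one part as a surrogate target domain per fold, and grid-search α and β over {0.01, 0.1, 1, 10, 100}) can be sketched as follows. This is a minimal illustration of the splitting and selection logic only; `score_fn` is a hypothetical stand-in for training the paper's transfer active learner on a fold and returning its validation accuracy, which the paper does not specify at this level of detail.

```python
import itertools

def class_folds(classes, k):
    """Split a list of class labels into k near-equal parts.

    Fold i holds out part i as the surrogate 'target' classes and uses
    the remaining k-1 parts as the 'source' classes, mirroring the
    paper's class-wise k-fold protocol.
    """
    parts = [classes[i::k] for i in range(k)]
    folds = []
    for i in range(k):
        target = parts[i]
        source = [c for j, part in enumerate(parts) if j != i for c in part]
        folds.append((source, target))
    return folds

# Candidate values for both alpha and beta, as stated in the paper.
GRID = [0.01, 0.1, 1, 10, 100]

def select_hyperparams(classes, k, score_fn):
    """Pick (alpha, beta) maximizing mean validation score across folds.

    `score_fn(source_classes, target_classes, alpha, beta)` is a
    hypothetical callback that would train on the source classes,
    run active learning on the held-out classes, and return accuracy.
    """
    best, best_score = None, float("-inf")
    for alpha, beta in itertools.product(GRID, GRID):
        avg = sum(
            score_fn(src, tgt, alpha, beta)
            for src, tgt in class_folds(classes, k)
        ) / k
        if avg > best_score:
            best, best_score = (alpha, beta), avg
    return best
```

For AwA this would be called with the 40 source classes and k = 4, so each fold validates on 10 held-out classes; the 5×5 grid means 25 (α, β) combinations are scored per dataset.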