Unsupervised Active Learning via Subspace Learning

Authors: Changsheng Li, Kaihang Mao, Lingyan Liang, Dongchun Ren, Wei Zhang, Ye Yuan, Guoren Wang

AAAI 2021, pp. 8332-8339 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments are performed on five publicly available datasets, and experimental results demonstrate that the proposed first formulation achieves performance comparable to the state-of-the-art methods, while the second formulation significantly outperforms them, achieving up to a 13% improvement over the second-best baseline. Extensive experiments are performed on multiple tasks usually requiring high annotation costs, and experimental results on five publicly available datasets demonstrate the efficacy of the proposed models. We evaluate the proposed methods on two video action recognition datasets, HMDB51 (Kuehne et al. 2011) and UCF50 (Reddy and Shah 2013), one facial age estimation dataset, UTKFace (Zhang, Song, and Qi 2017), one medical image dataset, HAM10000 (Tschandl, Rosendahl, and Kittler 2018), and one wine quality dataset (Cortez et al. 2009).
Researcher Affiliation | Collaboration | 1 School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China; 2 Inspur Group Company Limited, China; 3 Meituan, Beijing, China; 4 School of Information and Communication Engineering, University of Electronic Science and Technology of China
Pseudocode | Yes | Algorithm 1: Optimization Procedure for Solving Formulation II
Open Source Code | No | The paper does not provide any explicit statement about making the source code available, nor does it include links to a code repository.
Open Datasets | Yes | We evaluate the proposed methods on two video action recognition datasets, HMDB51 (Kuehne et al. 2011) and UCF50 (Reddy and Shah 2013), one facial age estimation dataset, UTKFace (Zhang, Song, and Qi 2017), one medical image dataset, HAM10000 (Tschandl, Rosendahl, and Kittler 2018), and one wine quality dataset (Cortez et al. 2009).
Dataset Splits | No | The paper mentions training an SVM classifier on the selected samples and evaluating on unseen samples, but it does not specify explicit dataset splits (e.g., percentages or counts) for training, validation, or testing, nor does it describe any cross-validation protocol for its models.
Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments, such as GPU models, CPU types, or memory specifications.
Software Dependencies | No | The paper mentions using an 'SVM classifier' but does not specify the software package or its version (e.g., scikit-learn and the version used).
Experiment Setup | Yes | The parameters λ, µ, and η in our algorithm are searched from {0.001, 0.01, 0.1, 1, 10}. In the experiment, we repeat every test case 5 times and report the average result and standard deviation. The number of selected samples is set to 50.
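
To make the protocol reported in the Dataset Splits and Experiment Setup rows concrete, the sketch below mirrors it: 50 samples are selected from an unlabeled pool, an SVM is trained on their labels, accuracy is measured on the remaining (unseen) samples, and each configuration is repeated 5 times with the mean and standard deviation reported. This is a minimal sketch, not the paper's method: the unsupervised subspace-learning selection is replaced by a random placeholder, the digits dataset and the SVC settings are stand-ins, and the λ/µ/η grid only illustrates the reported search range {0.001, 0.01, 0.1, 1, 10}.

```python
# Hypothetical sketch of the evaluation protocol described above, NOT the
# paper's method: the subspace-learning selection step is replaced by a
# random selector, and the dataset/classifier settings are assumptions.
import itertools

import numpy as np
from sklearn.datasets import load_digits
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC


def select_samples(X, budget, rng, lam, mu, eta):
    """Placeholder for the paper's selection step; lam/mu/eta mirror the
    lambda, mu, eta trade-off parameters but are unused in this stand-in."""
    return rng.choice(len(X), size=budget, replace=False)


def run_once(X, y, budget, seed, lam, mu, eta):
    rng = np.random.default_rng(seed)
    selected = select_samples(X, budget, rng, lam, mu, eta)
    mask = np.zeros(len(X), dtype=bool)
    mask[selected] = True
    clf = SVC(kernel="rbf", C=1.0)        # "SVM classifier"; kernel and C are assumed
    clf.fit(X[mask], y[mask])             # labels queried only for the selected samples
    return clf.score(X[~mask], y[~mask])  # evaluate on the remaining, unseen samples


X, y = load_digits(return_X_y=True)       # stand-in data; the paper uses HMDB51, UCF50, etc.
X = StandardScaler().fit_transform(X)

budget = 50                               # "the number of selected samples is set to 50"
grid = [0.001, 0.01, 0.1, 1, 10]          # reported search range for lambda, mu, eta

best = None
for lam, mu, eta in itertools.product(grid, repeat=3):
    # 5 repetitions per configuration, reporting mean and standard deviation
    scores = [run_once(X, y, budget, seed, lam, mu, eta) for seed in range(5)]
    mean, std = float(np.mean(scores)), float(np.std(scores))
    if best is None or mean > best[0]:
        best = (mean, std, (lam, mu, eta))

print(f"best accuracy: {best[0]:.3f} +/- {best[1]:.3f} at (lambda, mu, eta) = {best[2]}")
```

With the random placeholder the grid search is of course uninformative; in the actual method each (λ, µ, η) combination would drive a different subspace-based selection, which is what the search range reported in the paper refers to.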