On Deep Unsupervised Active Learning
Authors: Changsheng Li, Handong Ma, Zhao Kang, Ye Yuan, Xiao-Yu Zhang, Guoren Wang
IJCAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments are performed on six publicly available datasets, and experimental results clearly demonstrate the efficacy of our method, compared with state-of-the-art methods. |
| Researcher Affiliation | Academia | Changsheng Li¹, Handong Ma², Zhao Kang², Ye Yuan¹, Xiao-Yu Zhang³ and Guoren Wang¹. ¹School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China; ²SCSE, University of Electronic Science and Technology of China, Chengdu, China; ³Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China |
| Pseudocode | No | The paper describes its methods using mathematical equations and textual explanations, but does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any specific repository link or explicit statement about the availability of its own source code. |
| Open Datasets | Yes | Extensive experiments are performed on six publicly available datasets which are widely used for active learning [Baram et al., 2004]. The details of these datasets are summarized in Table 1. (Footnote: these datasets are downloaded from the UCI Machine Learning Repository.) |
| Dataset Splits | Yes | Following [Li et al., 2019], for each dataset, we randomly select 50% of the samples as candidates for sample selection, and use the rest as the testing data. (A hedged sketch of this split follows the table.) |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper mentions using Adam as the optimizer but does not provide specific version numbers for software libraries, programming languages, or other dependencies. |
| Experiment Setup | Yes | Throughout the experiment, we use three fully connected layers in the encoder and decoder blocks, respectively. The rectified linear unit (ReLU) is used as the non-linear activation function. In addition, we use Adam [Kingma and Ba, 2014] as the optimizer, where the learning rate is set to 1.0 × 10⁻⁴. We set the parameters γ = η for simplicity, and search all trade-off parameters in our algorithm from {0.01, 0.1, 1, 10}. The number of clusters K is searched from {5, 10, 20, 50}. To evaluate the effectiveness of sample selection, we train an SVM classifier with a linear kernel and C = 100 by using these selected samples as the training data. (A configuration sketch follows the table.) |
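
The "Dataset Splits" row quotes a simple 50/50 random partition but names no tooling. A minimal sketch of that protocol using scikit-learn; the placeholder data and `random_state` are assumptions, standing in for any of the six UCI datasets:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Placeholder data standing in for one of the six UCI datasets;
# the shapes here are arbitrary assumptions.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = rng.integers(0, 2, size=1000)

# Quoted protocol: 50% of samples become the candidate pool for
# active selection; the remaining 50% are held out for testing.
X_cand, X_test, y_cand, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0  # random_state is an assumption
)
```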
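The "Experiment Setup" row fixes the network depth, activation, optimizer, learning rate, and the SVM evaluator, but not the layer widths, input dimension, or training loop. A minimal PyTorch/scikit-learn sketch of that configuration, in which every unspecified quantity (the widths 256/128/64, `in_dim`, the dummy batch) is an assumption rather than the paper's actual setting:

```python
import torch
import torch.nn as nn
from sklearn.svm import SVC

# Symmetric autoencoder: three fully connected layers in each of the
# encoder and decoder, with ReLU activations, as the setup states.
# The widths (in_dim -> 256 -> 128 -> 64) are assumptions; the paper
# does not report them.
in_dim = 20
encoder = nn.Sequential(
    nn.Linear(in_dim, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
)
decoder = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, in_dim),
)
model = nn.Sequential(encoder, decoder)

# Adam with the reported learning rate of 1.0 × 10⁻⁴.
optimizer = torch.optim.Adam(model.parameters(), lr=1.0e-4)

# Plain reconstruction loss stands in for the paper's full objective,
# which also carries trade-off weights (γ = η, searched over
# {0.01, 0.1, 1, 10}) and a cluster count K from {5, 10, 20, 50}.
criterion = nn.MSELoss()
x = torch.randn(64, in_dim)  # dummy batch; size is an assumption
loss = criterion(model(x), x)
loss.backward()
optimizer.step()

# Evaluation protocol from the quoted setup: a linear-kernel SVM with
# C = 100, trained on whatever samples the selection step returns.
svm = SVC(kernel="linear", C=100)
```

Completing the evaluation would mean fitting `svm` on the selected candidates and scoring it on the held-out half, with the trade-off parameters and K tuned over the quoted grids.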