Learning by Actively Querying Strong Modal Features
Authors: Yang Yang, De-Chuan Zhan, Yuan Jiang
IJCAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on image datasets show that ACQUEST achieves better classification performance than conventional active learning and multi-modal learning methods, at lower feature-acquisition and labeling costs. |
| Researcher Affiliation | Academia | Yang Yang, De-Chuan Zhan, and Yuan Jiang; National Key Laboratory for Novel Software Technology, Nanjing University, Collaborative Innovation Center of Novel Software Technology and Industrialization, Nanjing, 210023, China ({yangy, zhandc, jiangy}@lamda.nju.edu.cn) |
| Pseudocode | Yes | Algorithm 1 The ACQUEST Algorithm |
| Open Source Code | No | The paper does not provide an explicit statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | A subset of NUS [Chua et al., 2009] contains 9,109 images of 10 categories... The MSRA [Wang et al., 2009] subset contains 10,680 images of 9 categories... The Animal [Christoph et al., 2009] subset (abbreviated ANIM below) contains 30,475 images of 50 animal classes. |
| Dataset Splits | No | For all datasets, 66% of instances are randomly picked for training, and the remainder is used as the test set. The labeled ratio of the training set is set to 10%. The paper specifies train and test splits, but no explicit validation split. |
| Hardware Specification | No | No specific hardware details (e.g., CPU, GPU models, memory, or cloud instance types) used for running the experiments are provided in the paper. |
| Software Dependencies | No | No specific software dependencies with version numbers (e.g., programming languages, libraries, frameworks) are mentioned in the paper. |
| Experiment Setup | Yes | For all datasets, 66% of instances are randomly picked for training, and the remainder is used as the test set. The labeled ratio of the training set is set to 10%. All experiments are repeated 30 times. During the training phase, at most 30 unlabeled instances are automatically selected for strong modal feature value querying in each iteration... In all experiments, the parameters λ1 and λ2 in the training phase are tuned over {10^-1, 1, 10}. Empirically, ACQUEST converges when the difference in the objective value of Eq. 3 is less than 10^-5. |
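
The splitting and stopping protocol quoted above can be sketched as follows. Since the paper releases no code, the helper names (`split_dataset`, `converged`) and the use of NumPy are assumptions for illustration, not the authors' implementation; only the numeric settings (66% train, 10% labeled, the λ grid, the 10^-5 tolerance, and the 30-query cap) come from the paper.

```python
import numpy as np

def split_dataset(n_instances, train_frac=0.66, labeled_frac=0.10, seed=0):
    """Randomly partition instance indices per the paper's protocol:
    66% training (of which 10% start out labeled), the rest test."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(n_instances)
    n_train = int(round(train_frac * n_instances))
    train, test = perm[:n_train], perm[n_train:]
    n_labeled = int(round(labeled_frac * n_train))
    labeled, unlabeled = train[:n_labeled], train[n_labeled:]
    return labeled, unlabeled, test

# Settings stated in the paper
LAMBDA_GRID = [0.1, 1.0, 10.0]   # grid searched for lambda_1 and lambda_2
TOL = 1e-5                       # convergence threshold on the Eq. 3 objective
QUERIES_PER_ITER = 30            # max strong-modal feature queries per iteration

def converged(prev_obj, obj, tol=TOL):
    """Stop the alternating optimization once the objective change is below tol."""
    return abs(prev_obj - obj) < tol
```

For NUS (9,109 images) this yields roughly 6,012 training instances, of which about 601 are initially labeled, with the remaining ~3,097 held out for testing; the paper repeats the whole procedure 30 times with fresh random splits.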