Multi-instance multi-label active learning
Authors: Sheng-Jun Huang, Nengneng Gao, Songcan Chen
IJCAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on benchmark datasets demonstrate that the proposed approach achieves superior performance on various criteria. |
| Researcher Affiliation | Academia | Sheng-Jun Huang, Nengneng Gao, and Songcan Chen; College of Computer Science & Technology, Nanjing University of Aeronautics & Astronautics; Collaborative Innovation Center of Novel Software Technology and Industrialization. {huangsj, gaonn, s.chen}@nuaa.edu.cn |
| Pseudocode | Yes | Algorithm 1: The MIML-AL Algorithm |
| Open Source Code | No | The paper does not provide any explicit statements about releasing source code or links to a code repository for the described methodology. |
| Open Datasets | Yes | Among the publicly available MIML datasets, there are four datasets, i.e., MSRC [Winn et al., 2005], Letter Frost, Letter Carroll, and Bird Song [Briggs et al., 2012], where the labels of instances are available. |
| Dataset Splits | No | The paper mentions random sampling of 20% of bags as test data and 5% for initial labeling, but does not explicitly describe a separate validation set or split for hyperparameter tuning or early stopping during training. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, memory, or cloud instances) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies used in the experiments. |
| Experiment Setup | Yes | For MIML-AL, we fix the parameters b = 200, C = 10 for all datasets. (A hedged sketch of the implied protocol follows this table.) |
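Taken together, the Dataset Splits and Experiment Setup rows describe a standard pool-based active-learning evaluation: 20% of bags held out as test data, 5% labeled initially, and the parameters b = 200 and C = 10 fixed across datasets. The Python sketch below makes that protocol concrete under stated assumptions: `train_stub` and `query_score_stub` are hypothetical placeholders (the paper's Algorithm 1 defines the actual MIML-AL model and query criterion, which is not reproduced here), and treating b = 200 as a per-round query budget is an assumption, since the paper excerpt only states that b is fixed.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_stub(bags, labels, C=10):
    """Hypothetical stand-in for the MIML learner; C mirrors the
    paper's fixed C = 10, but its exact role is defined in the paper."""
    return {"C": C, "n_labeled": len(bags)}

def query_score_stub(model, bag):
    """Hypothetical stand-in for the query criterion; the paper's
    Algorithm 1 (MIML-AL) ranks queries by its own measure instead."""
    return float(rng.random())

def run_protocol(bags, labels, n_rounds=10, b=200):
    """Pool-based evaluation loop matching the split described above:
    20% of bags as test data, 5% of bags labeled initially."""
    n = len(bags)
    idx = rng.permutation(n)
    n_test = int(0.2 * n)               # 20% of bags held out for testing
    test_idx = list(idx[:n_test])
    pool = list(idx[n_test:])
    n_init = int(0.05 * n)              # 5% of bags labeled at the start
    labeled, unlabeled = pool[:n_init], pool[n_init:]

    for _ in range(n_rounds):
        model = train_stub([bags[i] for i in labeled],
                           [labels[i] for i in labeled])
        # Assumption: b = 200 acts as a per-round query budget.
        k = min(b, len(unlabeled))
        scores = [query_score_stub(model, bags[i]) for i in unlabeled]
        picked = np.argsort(scores)[::-1][:k]
        for j in sorted((int(p) for p in picked), reverse=True):
            labeled.append(unlabeled.pop(j))
        # Evaluation on the held-out 20% (test_idx) would go here.
    return model
```

Calling `run_protocol(list_of_bags, list_of_label_vectors)` reproduces only the sampling skeleton; reproducing the paper's reported results would additionally require the MIML-AL selection criterion and base classifier from Algorithm 1, for which no source code is released.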