Partial Multi-Label Learning
Authors: Ming-Kun Xie, Sheng-Jun Huang
AAAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on various datasets show that the proposed approach is effective for solving PML problems. |
| Researcher Affiliation | Academia | College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics; Collaborative Innovation Center of Novel Software Technology and Industrialization, Nanjing 211106, China; {mkxie,huangsj}@nuaa.edu.cn |
| Pseudocode | Yes | Algorithm 1 The PML-fp algorithm |
| Open Source Code | No | The paper does not contain any explicit statements or links indicating that the source code for the methodology described is publicly available. |
| Open Datasets | Yes | The experiments are performed on seven datasets spanning a broad range of applications: corel5k for image annotation, CAL500 and emotions for music classification, yeast for gene function prediction, genbase for protein classification, medical for text categorization, and delicious for web categorization. |
| Dataset Splits | No | The paper mentions 'training set' and 'test phase' but does not provide specific details about the training/validation/test splits, such as percentages, sample counts, or a cross-validation setup. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments, such as CPU or GPU models. |
| Software Dependencies | No | The paper mentions various multi-label learning methods for comparison (e.g., Rank-SVM, ML-kNN) and optimization techniques (quadratic programming, linear programming), but it does not specify any software names with version numbers. |
| Experiment Setup | Yes | For PML-lc, C1 is fixed to 1 as the default on all datasets. C2 is selected from {1, 2, ..., 10}, and C3 is selected from {1, 10, 100} according to performance on Hamming loss. The alternating optimization procedure iterates until the objective function converges or the iteration count exceeds a user-defined maximum. |
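The hyperparameter selection described in the Experiment Setup row can be sketched as a simple grid search. This is a minimal illustration, not the authors' code: `evaluate_hamming_loss` is a hypothetical callback standing in for training PML-lc with the given trade-off parameters and measuring Hamming loss on held-out data.

```python
from itertools import product

def select_hyperparameters(evaluate_hamming_loss):
    """Grid search over the PML-lc trade-off parameters as reported.

    `evaluate_hamming_loss(C1, C2, C3)` is a hypothetical callback that
    trains the model with the given parameters and returns the Hamming
    loss (lower is better) on held-out data.
    """
    C1 = 1  # fixed to 1 as the default on all datasets
    best = None
    # C2 from {1, ..., 10}, C3 from {1, 10, 100}, chosen by Hamming loss
    for C2, C3 in product(range(1, 11), [1, 10, 100]):
        loss = evaluate_hamming_loss(C1, C2, C3)
        if best is None or loss < best[0]:
            best = (loss, C1, C2, C3)
    return best[1:]  # (C1, C2, C3) achieving the lowest Hamming loss
```

In practice the callback would wrap the alternating optimization loop, stopping once the objective converges or the predefined iteration cap is reached.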