Partial Multi-Label Learning with Noisy Label Identification
Authors: Ming-Kun Xie, Sheng-Jun Huang (pp. 6454-6461)
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Extensive experiments on synthetic as well as real-world data sets validate the effectiveness of the proposed approach." From the Experimental Setting section: "We perform experiments on ten data sets in total, including both synthetic and real-world PML data sets." |
| Researcher Affiliation | Academia | College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics; MIIT Key Laboratory of Pattern Analysis and Machine Intelligence; Collaborative Innovation Center of Novel Software Technology and Industrialization, Nanjing, 211106. {mkxie, huangsj}@nuaa.edu.cn |
| Pseudocode | Yes | The main steps are summarized as follows: Choose θ₀ = θ₋₁ ∈ (0, 1], L > 1, U₀ = U₋₁, η > 1; set k = 0. In the k-th iteration: set Z_k = U_k + θ_k(θ_{k−1}⁻¹ − 1)(U_k − U_{k−1}); set U_{k+1} = argmin_U { h(U, Z_k) + (L/2)‖U − Z_k‖²_F }; while g(U_{k+1}) + β‖U_{k+1}‖_tr > h(U_{k+1}, Z_k) + (L/2)‖U_{k+1} − Z_k‖²_F, increase L = ηL and recompute U_{k+1} = argmin_U { h(U, Z_k) + (L/2)‖U − Z_k‖²_F }; set θ_{k+1} = (√(θ_k⁴ + 4θ_k²) − θ_k²)/2; update k = k + 1. The iteration continues until convergence. |
| Open Source Code | No | The paper does not provide an explicit statement or link indicating that the source code for the proposed PML-NI method is publicly available. |
| Open Datasets | Yes | Publicly available at http://mulan.sourceforge.net/datasets.html and http://meka.sourceforge.net/#datasets |
| Dataset Splits | No | The paper describes how 'partial multi-label assignments for the training data' were constructed for 8 datasets by simulating annotation, but it does not specify explicit training/validation/test dataset splits (e.g., percentages, sample counts, or k-fold cross-validation setup) needed for reproduction. |
| Hardware Specification | No | The paper does not specify any hardware details such as GPU models, CPU types, or specific computing environments used for running the experiments. |
| Software Dependencies | No | The paper mentions 'Libsvm (Chang and Lin 2011) is used as the base learner', but it does not provide a specific version number for Libsvm or any other software dependencies. |
| Experiment Setup | Yes | For PML-NI, balancing parameters are set as λ = 1, β = 1 and γ = 0.5. For the comparing methods, parameters are set as suggested in the original papers, i.e., PAR-VAL and PAR-MAP: balancing parameter α = 0.95 and credible label elicitation threshold thr = 0.9; PML-LRS: balancing parameters are set as γ = 0.01, β = 0.1 and η = 1. For CPLST, we take the first 5 principal components following the experimental setting in (Wang et al. 2019). k is set as 10 for all the nearest neighbor based algorithms. |
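The pseudocode quoted above is a standard accelerated proximal gradient (APG) loop with backtracking on the step constant L, where the trace-norm term β‖U‖_tr is handled by singular value thresholding. The sketch below is illustrative, not the paper's implementation: the smooth loss `g` and its gradient are left generic (PML-NI's actual loss couples the multi-label fitting term), and the convergence test, tolerances, and initialization at zero are assumptions.

```python
import numpy as np

def svt(Z, tau):
    """Singular value thresholding: proximal operator of tau * trace norm."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def apg_trace_norm(g, grad_g, shape, beta=1.0, L=1.0, eta=2.0,
                   max_iter=100, tol=1e-6):
    """Minimize g(U) + beta * ||U||_tr by accelerated proximal gradient,
    following the extrapolation / backtracking / theta-update steps above."""
    U_prev = np.zeros(shape)
    U = np.zeros(shape)
    theta_prev = theta = 1.0  # theta_0 = theta_{-1} in (0, 1]
    for _ in range(max_iter):
        # Extrapolation: Z_k = U_k + theta_k (1/theta_{k-1} - 1)(U_k - U_{k-1})
        Z = U + theta * (1.0 / theta_prev - 1.0) * (U - U_prev)
        while True:
            # Proximal step on the quadratic model of g around Z
            U_next = svt(Z - grad_g(Z) / L, beta / L)
            model = (g(Z) + np.sum(grad_g(Z) * (U_next - Z))
                     + 0.5 * L * np.linalg.norm(U_next - Z, 'fro') ** 2)
            if g(U_next) <= model + 1e-12:
                break
            L *= eta  # backtracking: increase L = eta * L and retry
        if np.linalg.norm(U_next - U, 'fro') < tol:
            return U_next
        # theta_{k+1} = (sqrt(theta_k^4 + 4 theta_k^2) - theta_k^2) / 2
        theta_prev, theta = theta, (np.sqrt(theta**4 + 4 * theta**2) - theta**2) / 2
        U_prev, U = U, U_next
    return U
```

With the toy smooth loss g(U) = ½‖U − M‖²_F, the fixed point is the closed-form shrinkage svt(M, β), which gives a quick sanity check of the loop.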