Discriminative and Correlative Partial Multi-Label Learning
Authors: Haobo Wang, Weiwei Liu, Yang Zhao, Chen Zhang, Tianlei Hu, Gang Chen
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on various real-world datasets clearly validate the superiority of our proposed method. Section 4 reports our experimental results on various real-world datasets. |
| Researcher Affiliation | Academia | Key Lab of Intelligent Computing Based Big Data of Zhejiang Province, Zhejiang University; College of Computer Science and Technology, Zhejiang University; School of Computer Science, Wuhan University |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | These datasets are collected from various real-world tasks: Image [Fang and Zhang, 2019] and Scene [Boutell et al., 2004] for image annotation, Slashdot [Read et al., 2011] for text categorization, Cal500 [Turnbull et al., 2008] and Emotions [Trohidis et al., 2008] for music classification. |
| Dataset Splits | No | The paper states 'All the datasets are randomly partitioned to 80% for training and the rest for testing.' but does not specify a validation split (a split sketch after the table illustrates this protocol). |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU, GPU models, or memory) used for running its experiments. |
| Software Dependencies | No | The paper mentions 'Scikit-learn's [Pedregosa et al., 2011] implementation' but does not specify a version number for Scikit-learn or any other key software dependencies. |
| Experiment Setup | Yes | In this paper, δ1, δ2 are empirically set as 0.01, 1 for Cal500 and 0.01, 0.5 for other datasets. The number of boosting rounds is fixed to 10. k is set as 10 for all the nearest neighbor based algorithms. For CPLST, we take the first 5 principal components. Following the experimental setting in [Fang and Zhang, 2019], we set thr = 0.9 and α = 0.95 for PARTICLE. A hedged configuration sketch of these settings follows the table. |
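
The Dataset Splits row quotes an 80%/20% random partition with no validation set. The sketch below illustrates that protocol with scikit-learn, which the paper also cites; the helper name `split_dataset`, the placeholder array shapes, and the fixed seed are assumptions for illustration, not details from the paper.

```python
# Minimal sketch of the 80% train / 20% test random partition quoted in the
# Dataset Splits row. Array shapes and the random seed are placeholders; the
# paper specifies only the 80/20 random split and no validation set.
import numpy as np
from sklearn.model_selection import train_test_split

def split_dataset(X, Y, seed=0):
    """Randomly partition features X and candidate-label matrix Y into 80% train / 20% test."""
    return train_test_split(X, Y, test_size=0.2, random_state=seed)

# Synthetic placeholders standing in for a benchmark such as Scene or Emotions.
X = np.random.rand(1000, 294)            # feature matrix (placeholder shape)
Y = np.random.randint(0, 2, (1000, 6))   # candidate-label matrix (placeholder shape)
X_train, X_test, Y_train, Y_test = split_dataset(X, Y)
```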
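
The Experiment Setup row lists the reported hyperparameters; the sketch below gathers them into a single configuration mapping. All key names (e.g., `delta`, `boosting_rounds`, `knn_k`) are illustrative labels chosen here, since no official code is released; the values are taken from the paper's stated settings.

```python
# Hedged collection of the hyperparameters reported in the Experiment Setup row.
# Key names are illustrative (the paper releases no source code); values follow
# the paper's stated settings.
EXPERIMENT_CONFIG = {
    "delta": {                        # (delta1, delta2) trade-off parameters
        "Cal500": (0.01, 1.0),
        "default": (0.01, 0.5),       # Image, Scene, Slashdot, Emotions
    },
    "boosting_rounds": 10,            # number of boosting rounds, fixed to 10
    "knn_k": 10,                      # k for all nearest-neighbor based algorithms
    "cplst_components": 5,            # first 5 principal components for CPLST
    "particle": {"thr": 0.9, "alpha": 0.95},  # PARTICLE settings from Fang and Zhang (2019)
}

def deltas_for(dataset_name):
    """Return (delta1, delta2) for a dataset, falling back to the common default."""
    return EXPERIMENT_CONFIG["delta"].get(dataset_name, EXPERIMENT_CONFIG["delta"]["default"])
```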