Weakly Supervised Multi-Label Learning via Label Enhancement
Authors: JiaQi Lv, Ning Xu, RenYi Zheng, Xin Geng
IJCAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments across a wide range of real-world datasets clearly validate the superiority of the proposed approach. |
| Researcher Affiliation | Academia | MOE Key Laboratory of Computer Network and Information Integration, China; School of Computer Science and Engineering, Southeast University, Nanjing 210096, China. {lvjiaqi, xning, zhengry, xgeng}@seu.edu.cn |
| Pseudocode | No | The paper describes procedures and equations but does not include a formally labeled 'Pseudocode' or 'Algorithm' block. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. |
| Open Datasets | Yes | A total of 14 real-world LDL datasets are employed for performance evaluation. The binarization method [Xu et al., 2018b] is adopted to get the logical labels from the real label distributions. Dataset sources: (1) http://palm.seu.edu.cn/xgeng/LDL/index.htm#data (2) http://mulan.sourceforge.net/datasets-mlc.html (3) http://www.kecl.ntt.co.jp/as/members/ueda/yahoo.tar |
| Dataset Splits | Yes | Half of the instances in each dataset are randomly chosen as the training set while the other half as the test set. |
| Hardware Specification | No | The paper does not explicitly describe the hardware used to run its experiments. |
| Software Dependencies | No | The paper mentions software components and algorithms such as the Gaussian kernel, k-means, and a QP toolbox, but does not name specific packages or version numbers needed for reproducibility. |
| Experiment Setup | Yes | The parameter λ in WSMLLE is chosen among {0.01, 0.1, 1} and the number of clusters g is chosen among {1, 2, ..., 10}. The kernel function is the Gaussian kernel. |
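
The 50/50 split reported in the Dataset Splits row can be illustrated with a minimal sketch. The paper does not release code, so everything below (the `half_split` helper, the synthetic stand-in arrays, the random seed) is an assumption for illustration only.

```python
# Minimal sketch of the reported protocol: half of the instances in each
# dataset are randomly assigned to training, the other half to testing.
import numpy as np

def half_split(X, Y, seed=0):
    """Randomly split instances 50/50 into training and test sets."""
    rng = np.random.default_rng(seed)      # seed is an assumption; not reported
    n = X.shape[0]
    perm = rng.permutation(n)
    half = n // 2
    train_idx, test_idx = perm[:half], perm[half:]
    return (X[train_idx], Y[train_idx]), (X[test_idx], Y[test_idx])

# Synthetic stand-in data; the real LDL datasets are available at
# http://palm.seu.edu.cn/xgeng/LDL/index.htm#data
X = np.random.rand(200, 36)                   # feature matrix
D = np.random.dirichlet(np.ones(5), 200)      # real label distributions
(X_tr, D_tr), (X_te, D_te) = half_split(X, D)
```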
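Similarly, the Experiment Setup row only reports a small hyperparameter grid (λ over {0.01, 0.1, 1}, g over {1, ..., 10}) and a Gaussian kernel. The sketch below enumerates that search space under stated assumptions: the kernel bandwidth, the placeholder objective, and all function names are invented here, since the paper specifies neither a bandwidth nor a selection criterion.

```python
# Sketch of the reported hyperparameter grid; the scoring function is a
# placeholder, not the WSMLLE objective.
import itertools
import numpy as np

def gaussian_kernel(X, sigma=1.0):
    """Gaussian (RBF) kernel matrix. The paper states a Gaussian kernel is
    used but does not report the bandwidth, so sigma=1.0 is an assumption."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma ** 2))

# Candidate values exactly as reported in the table row above.
lambdas = [0.01, 0.1, 1]
cluster_counts = range(1, 11)

def placeholder_score(K, lam, g, rng):
    """Stands in for training WSMLLE and measuring recovery performance."""
    return rng.random()

rng = np.random.default_rng(0)
X_train = rng.random((100, 36))   # synthetic stand-in features
K = gaussian_kernel(X_train)

best_score, best_params = -np.inf, None
for lam, g in itertools.product(lambdas, cluster_counts):
    score = placeholder_score(K, lam, g, rng)
    if score > best_score:
        best_score, best_params = score, (lam, g)

print("selected (lambda, g):", best_params)
```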