Discovering Latent Class Labels for Multi-Label Learning
Authors: Jun Huang, Linchuan Xu, Jing Wang, Lei Feng, Kenji Yamanishi
IJCAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments show the competitive performance of DLCL against other state-of-the-art MLL approaches. |
| Researcher Affiliation | Academia | Jun Huang¹·², Linchuan Xu¹, Jing Wang³, Lei Feng⁴, and Kenji Yamanishi¹. ¹Graduate School of Information Science and Technology, The University of Tokyo; ²School of Computer Science and Technology, Anhui University of Technology; ³School of Computing and Mathematical Sciences, University of Greenwich; ⁴School of Computer Science and Engineering, Nanyang Technological University. |
| Pseudocode | Yes | Algorithm 1 Training of DLCL |
| Open Source Code | No | The paper provides links to code for *other* methods (MLkNN, LLSF, KRAM) but does not provide a link or explicit statement for the open-sourcing of the DLCL method described in this paper. |
| Open Datasets | Yes | Table 1 shows a summary of the twelve experimental data sets: arts, bibtex, corel16k001, corel16k002, corel5k, education, medical, rcv1v2(subset1), stackex-chemistry, stackex-cooking, stackex-cs, stackex-philosophy. |
| Dataset Splits | Yes | Parameter tuning for each of them is based on 5-fold cross-validation over the training data of each data set (a minimal sketch of this tuning protocol appears after the table). |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper mentions general algorithms and learners (e.g., 'Linear Regression', 'proximal gradient descent'; a generic sketch of the latter appears after the table), but does not provide specific software dependencies with version numbers (e.g., library names with specific versions like 'PyTorch 1.9' or 'scikit-learn 0.24'). |
| Experiment Setup | Yes | Parameters λ1 and λ2 are tuned in {10^i \| i = −2, ..., 6}, λ3 is tuned in {2^i \| i = −2, ..., 4}, λ4 is tuned in {10^i \| i = −2, ..., 1}, and α is tuned in {0.4, 0.5, 0.6} (these grids are sketched in code below). |
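
For concreteness, here is a minimal sketch of the tuning protocol the table describes: exhaustive grid search over the reported parameter values, scored by 5-fold cross-validation on the training split only. The grids assume the PDF extraction dropped minus signs from the lower bounds of the exponent ranges (e.g., i = −2, ..., 6 rather than i = 2, ..., 6); `make_model`, `score`, `X_train`, and `Y_train` are hypothetical placeholders, since no DLCL implementation is released.

```python
# Hedged sketch of the paper's tuning protocol: grid search scored by 5-fold
# cross-validation on the training data. DLCL itself is not public, so
# `make_model` and `score` are hypothetical placeholders.
from itertools import product

import numpy as np
from sklearn.model_selection import KFold

# Grids as reported in the paper, assuming the extraction dropped the minus
# signs on the lower bound of each exponent range.
PARAM_GRID = {
    "lam1": [10.0 ** i for i in range(-2, 7)],  # lambda_1 in {10^i | i = -2..6}
    "lam2": [10.0 ** i for i in range(-2, 7)],  # lambda_2, same range
    "lam3": [2.0 ** i for i in range(-2, 5)],   # lambda_3 in {2^i | i = -2..4}
    "lam4": [10.0 ** i for i in range(-2, 2)],  # lambda_4 in {10^i | i = -2..1}
    "alpha": [0.4, 0.5, 0.6],
}

def tune(X_train, Y_train, make_model, score):
    """Return the parameter setting with the best mean 5-fold CV score."""
    kf = KFold(n_splits=5, shuffle=True, random_state=0)
    keys = list(PARAM_GRID)
    best_params, best_score = None, -np.inf
    for values in product(*(PARAM_GRID[k] for k in keys)):
        params = dict(zip(keys, values))
        fold_scores = []
        for tr_idx, va_idx in kf.split(X_train):
            model = make_model(**params)  # hypothetical DLCL constructor
            model.fit(X_train[tr_idx], Y_train[tr_idx])
            fold_scores.append(score(Y_train[va_idx],
                                     model.predict(X_train[va_idx])))
        mean_score = float(np.mean(fold_scores))
        if mean_score > best_score:
            best_params, best_score = params, mean_score
    return best_params, best_score
```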
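
The optimizer is named in the paper only as a proximal gradient descent algorithm. Below is a generic illustration of that optimizer family, minimizing a smooth loss plus an ℓ1 penalty via the standard soft-thresholding proximal operator; it is not a reconstruction of DLCL's actual objective or update rules.

```python
# Generic proximal gradient descent for min_W f(W) + lam * ||W||_1, where f is
# smooth. Shown only to illustrate the optimizer family the paper names; the
# DLCL objective and its exact updates are not reproduced here.
import numpy as np

def soft_threshold(W, t):
    """Proximal operator of t * ||.||_1: elementwise shrinkage toward zero."""
    return np.sign(W) * np.maximum(np.abs(W) - t, 0.0)

def proximal_gradient(grad_f, W0, lam, step=1e-3, iters=1000):
    """Alternate a gradient step on f with the l1 proximal (shrinkage) step."""
    W = W0.copy()
    for _ in range(iters):
        W = soft_threshold(W - step * grad_f(W), step * lam)
    return W

# Toy usage: l1-regularized least squares, where grad f(W) = X^T (X W - Y).
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
Y = rng.standard_normal((50, 3))
W_hat = proximal_gradient(lambda W: X.T @ (X @ W - Y),
                          np.zeros((10, 3)), lam=0.1)
```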