A Unified View of Multi-Label Performance Measures
Authors: Xi-Zhu Wu, Zhi-Hua Zhou
ICML 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "...empirical results validate our theoretical findings." "Section 5 reports the results of experiments." "We conduct experiments with LIMO on both synthetic and benchmark data." |
| Researcher Affiliation | Academia | National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China. |
| Pseudocode | Yes | Algorithm 1 LIMO |
| Open Source Code | No | The paper does not provide any explicit statement or link for open-source code for the described methodology. |
| Open Datasets | Yes | Five benchmark multi-label datasets are used in our experiments. We choose them because they denote different domains: (i) A music dataset CAL500, (ii) an email dataset enron, (iii) a clinical text dataset medical, (iv) an image dataset corel5k, (v) a tagging dataset bibtex. We randomly split each dataset into two parts, i.e., 70% for training and 30% for testing. The experiments are repeated ten times, and the averaged results are reported. (Footnote: http://mulan.sourceforge.net/datasets-mlc.html) |
| Dataset Splits | No | The paper explicitly mentions a "70% for training and 30% for testing" split but does not specify a separate validation set. |
| Hardware Specification | No | The paper does not provide any specific hardware details used for running its experiments. |
| Software Dependencies | No | The paper mentions using "L2-regularized SVM" and implies use of general machine learning libraries but does not provide specific version numbers for any software dependencies. |
| Experiment Setup | Yes | We randomly split each dataset into two parts, i.e., 70% for training and 30% for testing. The experiments are repeated ten times, and the averaged results are reported. The step size of SGD is set to 0.01. For BR, L2-regularized SVM (Chang & Lin, 2011) with C = 1 is used as the base learner. For ML-kNN and GFM, the number of nearest neighbors is 10. LIMO (λ1 = λ2 = 1) is compared against its degenerated versions LIMO-inst (λ1 = 0, λ2 = 1) and LIMO-label (λ1 = 1, λ2 = 0). (Illustrative sketches of this setup follow the table.) |
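
The split-and-repeat protocol quoted in the "Open Datasets" and "Experiment Setup" rows could be approximated as in the sketch below. This is a minimal sketch, not the authors' code: it assumes the Mulan ARFF files have already been parsed into dense NumPy arrays `X` and `Y`, uses scikit-learn's `LinearSVC` (C = 1) in a one-vs-rest wrapper as a stand-in for the L2-regularized SVM of Chang & Lin (2011) used for BR, and reports only a placeholder accuracy rather than the paper's eleven multi-label measures.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC


def evaluate_br(X, Y, n_repeats=10):
    """Average a Binary Relevance baseline over ten random 70/30 splits.

    X: (n, d) feature matrix; Y: (n, L) binary label-indicator matrix,
    assumed already parsed from the Mulan ARFF files (CAL500, enron, ...).
    """
    scores = []
    for seed in range(n_repeats):
        X_tr, X_te, Y_tr, Y_te = train_test_split(
            X, Y, test_size=0.3, random_state=seed)
        # One independent L2-regularized linear SVM per label, C = 1 as quoted;
        # LinearSVC stands in for the LIBSVM/LIBLINEAR solver cited in the paper.
        br = OneVsRestClassifier(LinearSVC(C=1.0))
        br.fit(X_tr, Y_tr)
        pred = br.predict(X_te)
        # Placeholder measure (Hamming accuracy); the paper instead reports
        # eleven multi-label performance measures at this point.
        scores.append((pred == Y_te).mean())
    return float(np.mean(scores))
```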
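
The LIMO hyperparameters quoted above (SGD step size 0.01; trade-off weights λ1 and λ2) can be read off the variant names: λ2 weights an instance-wise margin term (LIMO-inst keeps only it) and λ1 a label-wise margin term (LIMO-label keeps only it). The sketch below is only a schematic of that trade-off under those assumptions, not a reproduction of the paper's Algorithm 1; the pairwise sampling scheme, the hinge surrogate, and the omission of the L2 weight penalty are simplifications.

```python
import numpy as np


def limo_sgd(X, Y, lam1=1.0, lam2=1.0, step=0.01, n_iter=100_000, seed=0):
    """Schematic SGD over instance-wise and label-wise margin hinges.

    lam2 weights the instance-wise term and lam1 the label-wise term, so
    (lam1, lam2) = (0, 1) and (1, 0) mimic the LIMO-inst / LIMO-label settings.
    Illustrative sketch only, not the paper's Algorithm 1.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    L = Y.shape[1]
    W = np.zeros((d, L))  # one linear scorer per label

    for _ in range(n_iter):
        # Instance-wise margin: for a random instance, a relevant label should
        # outscore an irrelevant one by at least 1 (hinge surrogate).
        i = rng.integers(n)
        pos, neg = np.flatnonzero(Y[i] == 1), np.flatnonzero(Y[i] == 0)
        if lam2 > 0 and len(pos) and len(neg):
            p, q = rng.choice(pos), rng.choice(neg)
            if X[i] @ (W[:, p] - W[:, q]) < 1:
                W[:, p] += step * lam2 * X[i]
                W[:, q] -= step * lam2 * X[i]

        # Label-wise margin: for a random label, a positive instance should
        # outscore a negative one by at least 1.
        j = rng.integers(L)
        pos_i, neg_i = np.flatnonzero(Y[:, j] == 1), np.flatnonzero(Y[:, j] == 0)
        if lam1 > 0 and len(pos_i) and len(neg_i):
            a, b = rng.choice(pos_i), rng.choice(neg_i)
            if (X[a] - X[b]) @ W[:, j] < 1:
                W[:, j] += step * lam1 * (X[a] - X[b])
        # (The L2 penalty on W from the paper's objective is omitted here.)
    return W


# The three configurations compared in the table, as (lam1, lam2) pairs:
LIMO_VARIANTS = {"LIMO": (1.0, 1.0), "LIMO-inst": (0.0, 1.0), "LIMO-label": (1.0, 0.0)}
```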