Privileged Multi-label Learning
Authors: Shan You, Chang Xu, Yunhe Wang, Chao Xu, Dacheng Tao
IJCAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on benchmark datasets show that through privileged label features, the performance can be significantly improved and PrML is superior to several competing methods in most cases. |
| Researcher Affiliation | Academia | 1Key Lab. of Machine Perception (MOE), School of EECS, Peking University, P. R. China 2Cooperative Medianet Innovation Center, Peking University, P. R. China 3UBTech Sydney AI Institute, School of IT, FEIT, The University of Sydney, Australia |
| Pseudocode | Yes | Algorithm 1 Privileged Multi-label Learning (PrML) |
| Open Source Code | No | The paper does not provide any statement or link regarding the public availability of its source code. |
| Open Datasets | Yes | We select six benchmark multi-label datasets, including enron, yeast, corel5k, bibtex, eurlex and mediamill. Specifically, we consider the cases when d (eurlex), L (eurlex) and n (corel5k, bibtex, eurlex & mediamill) are large, respectively. Also note that enron, corel5k, bibtex and eurlex have sparse features. See Table 1 for the details of these datasets. |
| Dataset Splits | Yes | In our experiment, parameter γ1 is set to 1; γ2 and C are in the ranges 0.1–2.0 and 10^-3–20 respectively, and determined using cross-validation on a part of the training points. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory, or cloud instance types) used for experiments are mentioned in the paper. |
| Software Dependencies | No | The paper mentions methods like SVM, but does not provide specific version numbers for any software dependencies or libraries used for implementation. |
| Experiment Setup | Yes | In our experiment, parameter γ1 is set to 1; γ2 and C are in the ranges 0.1–2.0 and 10^-3–20 respectively, and determined using cross-validation on a part of the training points. |
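The experiment setup above describes selecting the trade-off parameter C by cross-validation over a grid from roughly 10^-3 to 20. Since the paper's PrML solver is not publicly released, the following is only a minimal sketch of that tuning protocol: it substitutes a regularized least-squares multi-label model for PrML, and the names `fit_ridge` and `cv_error`, the synthetic data, and the exact grid values are assumptions for illustration, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic multi-label data: n points, d features, L labels in {-1, +1}.
# (Stand-in for benchmark datasets such as enron or yeast.)
n, d, L = 120, 10, 4
X = rng.normal(size=(n, d))
W_true = rng.normal(size=(d, L))
Y = np.sign(X @ W_true + 0.1 * rng.normal(size=(n, L)))

def fit_ridge(X, Y, C):
    # Least-squares surrogate with regularization strength 1/C; a simple
    # proxy for an SVM-style objective, NOT the paper's PrML formulation.
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + (1.0 / C) * np.eye(d), X.T @ Y)

def cv_error(X, Y, C, k=4):
    # k-fold cross-validation: average Hamming loss over held-out folds,
    # mimicking "determined using cross-validation on a part of the
    # training points".
    folds = np.array_split(np.arange(len(X)), k)
    errs = []
    for fold in folds:
        mask = np.ones(len(X), dtype=bool)
        mask[fold] = False
        W = fit_ridge(X[mask], Y[mask], C)
        pred = np.sign(X[fold] @ W)
        errs.append(np.mean(pred != Y[fold]))
    return float(np.mean(errs))

# Grid spanning the paper's reported search range for C: 10^-3 up to 20.
grid = [1e-3, 1e-2, 0.1, 1.0, 10.0, 20.0]
best_C = min(grid, key=lambda C: cv_error(X, Y, C))
print("selected C:", best_C)
```

In the paper's actual protocol, γ2 would be tuned over 0.1–2.0 jointly with C in the same way, adding a second axis to the grid.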