Multi-label Classification via Feature-aware Implicit Label Space Encoding

Authors: Zijia Lin, Guiguang Ding, Mingqing Hu, Jianmin Wang

ICML 2014 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments conducted on benchmark datasets well demonstrate its effectiveness.
Researcher Affiliation | Academia | 1 Department of Computer Science and Technology, Tsinghua University, Beijing, P.R. China; 2 School of Software, Tsinghua University, Beijing, P.R. China; 3 Institute of Computing Technology, Chinese Academy of Sciences, Beijing, P.R. China
Pseudocode | Yes | Algorithm 1: Implementation of FaIE
Open Source Code | No | The paper does not mention releasing source code for the described methodology or provide any links to a code repository.
Open Datasets | Yes | To validate the proposed FaIE, we download three benchmark datasets in different domains with relatively large vocabularies from Mulan (Tsoumakas et al., 2011) for experiments, i.e. delicious, CAL500, and mediamill. [...] Moreover, following (Hsu et al., 2009), we also conduct experiments on a randomly selected subset of the image dataset ESPGame, and take those tags appearing at least twice in the subset to form a much larger vocabulary.
Dataset Splits | Yes | In our experiments, each dataset is evenly and randomly divided into 5 parts. And then we perform 5 runs for each algorithm on it, taking one part for test and the rest for training in each run without duplication. [...] Moreover, for each run of any algorithm, we also conduct 5-fold cross validation on the training set for selecting model parameters via grid search in predefined value ranges. [A sketch of this split/validation protocol follows the table.]
Hardware Specification | Yes | All algorithms are conducted with Matlab on a server with an Intel Xeon E5620 CPU and 24G RAM, except that BR with L-SVM is conducted using LIBLINEAR (Fan et al., 2008).
Software Dependencies | No | The paper mentions 'Matlab' and 'LIBLINEAR' as software used, but does not provide specific version numbers for these components.
Experiment Setup | Yes | Specifically, α in the proposed FaIE is selected from {10^-1, 10^0, ..., 10^4}, τ for MLC-BMaD is chosen from {0.1, 0.2, ..., 1.0}, and the predefined sparsity level in CS is selected from {1, 2, ..., M} with M being the maximal number of labels in an instance, etc.
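
The split-and-validation protocol quoted in the Dataset Splits row maps directly onto standard tooling. Below is a minimal sketch, assuming scikit-learn and NumPy, with a placeholder one-vs-rest logistic-regression model standing in for FaIE and the baselines; the estimator, its parameter grid, and the ranking metric are our illustrative assumptions, not the authors' Matlab code.

```python
# Sketch of the evaluation protocol: even random 5-part split, 5 runs with one
# part as the test set each time, and 5-fold cross validation on each training
# set for model selection via grid search.
import numpy as np
from sklearn.model_selection import KFold, GridSearchCV
from sklearn.multiclass import OneVsRestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import label_ranking_average_precision_score

def run_protocol(X, Y, seed=0):
    """X: (n_samples, n_features) features; Y: (n_samples, n_labels) binary label matrix."""
    outer = KFold(n_splits=5, shuffle=True, random_state=seed)   # even, random 5-part split
    scores = []
    for train_idx, test_idx in outer.split(X):                   # 5 runs without duplication
        # Placeholder multi-label learner; the paper evaluates FaIE and its baselines here.
        model = OneVsRestClassifier(LogisticRegression(max_iter=1000))
        param_grid = {"estimator__C": [0.1, 1.0, 10.0]}          # illustrative grid, not the paper's
        search = GridSearchCV(model, param_grid, cv=5)           # inner 5-fold CV on the training set
        search.fit(X[train_idx], Y[train_idx])
        y_score = search.predict_proba(X[test_idx])
        scores.append(label_ranking_average_precision_score(Y[test_idx], y_score))
    return float(np.mean(scores)), float(np.std(scores))
```

Each of the 5 runs yields one score; the mean and standard deviation over runs are returned, mirroring the per-run evaluation the paper describes.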
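
The hyperparameter ranges quoted in the Experiment Setup row translate into candidate lists for that grid search. A small sketch, again assuming NumPy; the variable names are ours, and M is derived from the binary label matrix as the maximal number of labels per instance.

```python
import numpy as np

# Candidate values quoted in the Experiment Setup row (names are illustrative).
alpha_grid = [10.0 ** p for p in range(-1, 5)]        # FaIE's alpha: {10^-1, 10^0, ..., 10^4}
tau_grid = [k / 10 for k in range(1, 11)]             # MLC-BMaD's tau: {0.1, 0.2, ..., 1.0}

def cs_sparsity_grid(Y):
    """CS sparsity levels {1, 2, ..., M}, with M the maximal number of labels in an instance."""
    M = int(np.asarray(Y).sum(axis=1).max())
    return list(range(1, M + 1))

# Example: a toy label matrix where one instance carries 3 labels gives M = 3.
Y_toy = np.array([[1, 0, 1], [1, 1, 1], [0, 1, 0]])
assert cs_sparsity_grid(Y_toy) == [1, 2, 3]
```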