Multilabel Classification with Label Correlations and Missing Labels

Authors: Wei Bi, James Kwok

AAAI 2014 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments on a number of real-world data sets with both complete and missing labels demonstrate that the proposed algorithm can consistently outperform state-of-the-art multilabel classification algorithms." "In this section, experiments are performed on five image annotation data sets (Table 1) used in (Guillaumin et al. 2009)."
Researcher Affiliation | Academia | "Wei Bi, James T. Kwok, Department of Computer Science and Engineering, Hong Kong University of Science and Technology, Hong Kong, {weibi, jamesk}@cse.ust.hk"
Pseudocode | No | The paper describes mathematical formulations and optimization subproblems, but does not include any structured pseudocode or algorithm blocks.
Open Source Code | No | "Code is from http://www.cs.berkeley.edu/~bharath2/codes/M3L/download.html", "Code is from http://www.cse.msu.edu/~bucakser/software.html", "Code is from http://www.cse.wustl.edu/~mchen/" (these refer to third-party code, not the proposed method's code). No explicit statement or link is provided for the authors' own open-source code for the proposed method.
Open Datasets | Yes | "In this section, experiments are performed on five image annotation data sets (Table 1) used in (Guillaumin et al. 2009)." Footnote 3: http://lear.inrialpes.fr/people/guillaumin/data.php
Dataset Splits | Yes | "Parameter tuning for all the methods is based on a validation set obtained by randomly sampling 30% of the training data." "Results based on 5-fold cross-validation are shown in Table 2."
Hardware Specification | No | The paper does not provide specific details about the hardware used for running its experiments.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers.
Experiment Setup | Yes | "Parameter tuning for all the methods is based on a validation set obtained by randomly sampling 30% of the training data." "As in (Guillaumin et al. 2009), we avoid this problem by predicting as positive the five labels with the largest prediction scores." (This evaluation protocol is sketched after the table.)
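
The "Dataset Splits" and "Experiment Setup" rows quote the paper's evaluation protocol: 5-fold cross-validation, a validation set drawn by randomly sampling 30% of each training fold for parameter tuning, and predicting as positive the five labels with the largest prediction scores. The following is a minimal sketch of that protocol only; it is not the authors' released code (none exists). It assumes a generic one-vs-rest linear SVM as a stand-in classifier and micro-F1 as the tuning criterion, and the names `top5_predict`, `micro_f1`, and `run_cv` are hypothetical.

```python
# Sketch of the quoted protocol: 5-fold CV, 30% validation split for
# parameter tuning, top-5 label prediction. The classifier is a
# stand-in (one-vs-rest linear SVM), NOT the paper's proposed method.
import numpy as np
from sklearn.model_selection import KFold, train_test_split
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

def top5_predict(scores):
    """Mark the five labels with the largest scores as positive,
    as in (Guillaumin et al. 2009)."""
    pred = np.zeros_like(scores, dtype=int)
    top5 = np.argsort(scores, axis=1)[:, -5:]   # indices of the 5 largest scores per row
    np.put_along_axis(pred, top5, 1, axis=1)
    return pred

def micro_f1(Y_true, Y_pred):
    """Micro-averaged F1 over a binary label-indicator matrix."""
    tp = np.sum((Y_true == 1) & (Y_pred == 1))
    fp = np.sum((Y_true == 0) & (Y_pred == 1))
    fn = np.sum((Y_true == 1) & (Y_pred == 0))
    return 2 * tp / (2 * tp + fp + fn)

def run_cv(X, Y, C_grid=(0.1, 1.0, 10.0), seed=0):
    """5-fold CV; in each fold, 30% of the training data is held out
    as a validation set to tune the (hypothetical) parameter C."""
    kf = KFold(n_splits=5, shuffle=True, random_state=seed)
    fold_scores = []
    for train_idx, test_idx in kf.split(X):
        X_tr, Y_tr = X[train_idx], Y[train_idx]
        X_te, Y_te = X[test_idx], Y[test_idx]
        # 30% of the training fold is sampled as the validation set.
        X_fit, X_val, Y_fit, Y_val = train_test_split(
            X_tr, Y_tr, test_size=0.3, random_state=seed)
        best_C, best_f1 = None, -1.0
        for C in C_grid:
            clf = OneVsRestClassifier(LinearSVC(C=C)).fit(X_fit, Y_fit)
            f1 = micro_f1(Y_val, top5_predict(clf.decision_function(X_val)))
            if f1 > best_f1:
                best_C, best_f1 = C, f1
        # Retrain on the full training fold with the tuned parameter.
        clf = OneVsRestClassifier(LinearSVC(C=best_C)).fit(X_tr, Y_tr)
        fold_scores.append(micro_f1(Y_te, top5_predict(clf.decision_function(X_te))))
    return np.mean(fold_scores)
```

The fixed top-5 rule sidesteps the need to threshold real-valued prediction scores, which is why the paper adopts it from (Guillaumin et al. 2009); any multilabel metric could replace the micro-F1 used here for tuning.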