Multi-Label Structure Learning with Ising Model Selection

Authors: André R. Gonçalves, Fernando J. Von Zuben, Arindam Banerjee

IJCAI 2015

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments considering several existing multi-label algorithms indicate that the proposed method, while conceptually simple, outperforms the contenders in several datasets and performance metrics." / "We have conducted extensive experiments on eight multi-label classification datasets and compared the effectiveness of the proposed formulation in terms of six performance measures."
Researcher Affiliation | Collaboration | André R. Gonçalves, CPqD Foundation / University of Campinas, Brazil, andrerg@cpqd.com.br; Fernando J. Von Zuben, FEEC/Unicamp, University of Campinas, Brazil, vonzuben@dca.fee.unicamp.br; Arindam Banerjee, Computer Science Dept., University of Minnesota, USA, banerjee@cs.umn.edu
Pseudocode | Yes | "Algorithm 1: I-MTSL algorithm."
Open Source Code | No | "The remaining methods were implemented by the authors (I-MTSL code will be released)."
Open Datasets | Yes | "All datasets were downloaded from the Mulan webpage: http://mulan.sourceforge.net/datasets-mlc.html"
Dataset Splits | Yes | "We selected 20% of the training set to act as a validation set (holdout cross-validation) and tested the parameters on a grid containing ten equally spaced values in the interval [0, 5]."
Hardware Specification | No | "Access to computing facilities was provided by the University of Minnesota Supercomputing Institute (MSI)." This is a general acknowledgment and does not include specific hardware details such as GPU/CPU models or memory.
Software Dependencies | No | "For CMTL, Low Rank, and MTL-FEAT we used the MALSAR [Zhou et al., 2011b] package." A package is named, but no version number is given, and no other software versions are listed.
Experiment Setup | Yes | "Logistic regression was used as the base classifier for all algorithms. Z-score normalization was applied to all datasets, so covariates have zero mean and unit standard deviation. For all methods, parameters were chosen following the same procedure: 20% of the training set served as a validation set (holdout cross-validation), and parameters were tested on a grid of ten equally spaced values in the interval [0, 5]. The parameter with the best average accuracy over all binary classification problems on the validation set was used on the test set. The reported results are based on ten independent runs of each algorithm."
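The tuning protocol quoted above (z-score normalization, 20% holdout validation, a grid of ten equally spaced values in [0, 5]) can be sketched as follows. This is a minimal illustration using scikit-learn and synthetic data, not the authors' code; in particular, treating each grid value as a regularization strength mapped to scikit-learn's inverse parameter C is an assumption, since the paper excerpt does not specify which parameter the grid tunes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for one binary label of a multi-label dataset
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)

# Z-score normalization: zero mean, unit standard deviation per covariate
X = (X - X.mean(axis=0)) / X.std(axis=0)

# Hold out 20% of the training set as a validation set
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

# Grid of ten equally spaced values in [0, 5]; here (an assumption) each grid
# value is treated as a regularization strength lambda, mapped to sklearn's
# inverse-regularization parameter C = 1 / lambda
grid = np.linspace(0, 5, 10)
best_param, best_acc = None, -1.0
for lam in grid:
    C = 1e6 if lam == 0 else 1.0 / lam  # lambda = 0 ~ (nearly) unregularized
    clf = LogisticRegression(C=C, max_iter=1000).fit(X_tr, y_tr)
    acc = clf.score(X_val, y_val)
    if acc > best_acc:
        best_param, best_acc = lam, acc

print("best grid value:", best_param, "validation accuracy:", round(best_acc, 3))
```

In the paper's multi-label setting this selection would average validation accuracy over all binary classification problems (one per label) before picking the winning parameter; the sketch shows the single-label case for brevity.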