Compact Multi-Label Learning

Authors: Xiaobo Shen, Weiwei Liu, Ivor Tsang, Quan-Sen Sun, Yew-Soon Ong

AAAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on eight real-world datasets demonstrate the superiority of the proposed CoH over the state-of-the-art methods in terms of both prediction accuracy and efficiency.
Researcher Affiliation | Academia | School of Computer Science and Engineering, Nanyang Technological University; School of Computer Science and Engineering, The University of New South Wales; Center for Artificial Intelligence, University of Technology Sydney; School of Computer Science and Engineering, Nanjing University of Science and Technology. {njust.shenxiaobo, liuweiwei863}@gmail.com, ivor.tsang@uts.edu.au, sunquansen@njust.edu.cn, asysong@ntu.edu.sg
Pseudocode | Yes | Algorithm 1: Co-Hashing (CoH) and Algorithm 2: Testing Algorithm. (A generic sketch of the hash-based kNN testing pattern appears after this table.)
Open Source Code | No | The paper does not contain any explicit statement about releasing source code for the described methodology, nor does it provide a link to a code repository.
Open Datasets | Yes | Datasets are drawn from public repositories: http://mulan.sourceforge.net and http://manikvarma.org/downloads/XC/XMLRepository.html. We split the training and testing sets of the two NUS-WIDE datasets, WIKI10, and DELICIOUS-L by following the publicly available experimental settings (Chua et al. 2009; Bhatia et al. 2015).
Dataset Splits | Yes | We split the training and testing sets of the two NUS-WIDE datasets, WIKI10, and DELICIOUS-L by following the publicly available experimental settings (Chua et al. 2009; Bhatia et al. 2015), while 10-fold cross-validation is applied for the other datasets. ... the k in kNN search is selected using 10-fold cross-validation over the range {1, 5, 10, 20} for all kNN-based methods. (See the selection sketch after this table.)
Hardware Specification | Yes | All computations are performed on a Red Hat Enterprise 64-bit Linux workstation with an 18-core Intel Xeon E5-2680 CPU (2.80 GHz) and 256 GB of memory.
Software Dependencies | No | The paper mentions the use of "the linear classification/regression package LIBLINEAR (Fan et al. 2008)" but does not specify its version number or any other software dependencies with version details.
Experiment Setup | Yes | Following the experimental settings of (Liu and Tsang 2015), we set η = 0.4 in LM-kNN, and C = 10 in BR and LM-kNN. According to the original settings (Bhatia et al. 2015), we set the number of clusters to n/6000 and the number of learners to 15 for SLEEC. In the proposed CoH, the regularization parameter α is empirically set to 100 for the two NUS-WIDE datasets, and 1 for the other datasets. The dimension of the common subspace in SLEEC and CoH is empirically set to 50 for the two NUS-WIDE, WIKI10, and DELICIOUS-L datasets, and 100 for the others. Following similar settings in (Zhang and Zhou 2007; Bhatia et al. 2015), the k in kNN search is selected using 10-fold cross-validation over the range {1, 5, 10, 20} for all kNN-based methods. (These values are collected in the configuration sketch after this table.)
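
For reference, the hyper-parameters quoted in the Experiment Setup row can be gathered in one place. The Python dictionary below is a transcription of the stated values only; the key names are illustrative, since the paper releases no code.

    # Hyper-parameters as reported in the paper's experiment setup.
    # Key names are illustrative; none of these identifiers come from
    # an official implementation (the paper ships no code).
    EXPERIMENT_SETTINGS = {
        "CoH": {
            # alpha: 100 on the two NUS-WIDE datasets, 1 elsewhere
            "alpha": {"nus_wide": 100, "default": 1},
            # common-subspace dimension: 50 on NUS-WIDE / WIKI10 /
            # DELICIOUS-L, 100 on the other datasets
            "subspace_dim": {"nus_wide": 50, "wiki10": 50,
                             "delicious_l": 50, "default": 100},
        },
        "SLEEC": {
            "num_clusters": "n / 6000",  # n = number of training points
            "num_learners": 15,
            "subspace_dim": {"nus_wide": 50, "wiki10": 50,
                             "delicious_l": 50, "default": 100},
        },
        "LM-kNN": {"eta": 0.4, "C": 10},
        "BR": {"C": 10},
        # grid searched by 10-fold CV for all kNN-based methods
        "knn_k_grid": [1, 5, 10, 20],
    }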
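The k-selection protocol quoted in the Dataset Splits row (10-fold cross-validation over {1, 5, 10, 20}) follows a standard pattern. Below is a minimal sketch assuming scikit-learn-style NumPy arrays X (features) and Y (binary label matrix); KNeighborsClassifier and Hamming loss stand in for whichever kNN predictor and validation metric each method actually uses, which this page does not specify.

    import numpy as np
    from sklearn.model_selection import KFold
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.metrics import hamming_loss

    def select_k(X, Y, candidates=(1, 5, 10, 20), n_splits=10, seed=0):
        """Pick k by 10-fold CV, as described for all kNN-based methods."""
        kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
        best_k, best_loss = None, np.inf
        for k in candidates:
            fold_losses = []
            for tr, va in kf.split(X):
                clf = KNeighborsClassifier(n_neighbors=k).fit(X[tr], Y[tr])
                fold_losses.append(hamming_loss(Y[va], clf.predict(X[va])))
            mean_loss = float(np.mean(fold_losses))
            if mean_loss < best_loss:
                best_k, best_loss = k, mean_loss
        return best_k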
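The paper's Algorithm 2 (Testing Algorithm) is not reproduced on this page. For orientation only, here is a generic sketch of the usual testing pattern for hashing-based multi-label methods: encode the query, retrieve the k nearest training codes by Hamming distance, and aggregate the neighbors' label vectors. The majority-vote aggregation is an assumption for illustration, not a detail taken from the paper.

    import numpy as np

    def hamming_knn_predict(query_code, train_codes, train_labels, k):
        """Generic hash-based kNN label prediction (illustrative only).

        query_code:   (b,) array of 0/1 bits for the test instance
        train_codes:  (n, b) array of 0/1 bits for training instances
        train_labels: (n, L) binary label matrix
        """
        # Hamming distance = number of differing bits.
        dists = np.count_nonzero(train_codes != query_code, axis=1)
        nearest = np.argsort(dists)[:k]
        # Simple aggregation: predict each label held by at least half
        # of the k nearest neighbors (one common choice, not the paper's).
        votes = train_labels[nearest].mean(axis=0)
        return (votes >= 0.5).astype(int)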