Collaboration Based Multi-Label Learning

Authors: Lei Feng, Bo An, Shuo He (pp. 3550-3557)

AAAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experimental results show that our approach outperforms the state-of-the-art counterparts. In this section, we conduct extensive experiments on various datasets to validate the effectiveness of CAMEL.
Researcher Affiliation | Collaboration | Lei Feng (1,2), Bo An (1), Shuo He (3); (1) School of Computer Science and Engineering, Nanyang Technological University, Singapore; (2) Alibaba-NTU Singapore Joint Research Institute, Singapore; (3) College of Computer and Information Science, Southwest University, Chongqing, China
Pseudocode | Yes | Algorithm 1: The CAMEL Algorithm
Open Source Code | No | The paper does not provide any explicit statement about releasing open-source code or a link to a code repository for the described methodology.
Open Datasets | Yes | For comprehensive performance evaluation, we collect sixteen benchmark multi-label datasets. For each dataset S, we denote by |S|, dim(S), L(S), LCard(S), and F(S) the number of examples, the number of features (dimensions), the number of distinct class labels, the average number of labels associated with each example, and the feature type, respectively. Table 1 summarizes the detailed characteristics of these datasets, which are organized in ascending order of |S|.
Dataset Splits | Yes | For performance evaluation, 10-fold cross-validation is conducted on these datasets, where mean metric values with standard deviations are recorded. All of these parameters are decided by conducting 5-fold cross-validation on the training set.
Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments.
Software Dependencies | No | The paper mentions that 'BR, ECC, and RAKEL are implemented under the MULAN multi-label learning package (Tsoumakas et al. 2011) by using the logistic regression model as the base classifier,' but it does not specify version numbers for MULAN or any other software dependencies.
Experiment Setup | Yes | For the proposed approach CAMEL, λ1 is empirically set to 1, λ2 is chosen from {10^-3, 2*10^-3, 10^-2, 2*10^-2, ..., 100}, and α is chosen from {0, 0.1, ..., 1}. All of these parameters are decided by conducting 5-fold cross-validation on the training set. For CAMEL, the Gaussian kernel function K(x_i, x_j) = exp(-||x_i - x_j||_2^2 / (2σ^2)) is employed, with σ set to the average Euclidean distance of all pairs of training instances.
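
The Open Datasets row above quotes the notation used for the dataset summary (|S|, dim(S), L(S), LCard(S), F(S)). As a minimal sketch, not taken from the paper, the numeric statistics can be computed directly from a feature matrix and a binary label matrix; the feature type F(S) is a property of the data and is not derived here.

```python
import numpy as np

def dataset_statistics(X, Y):
    """Compute the dataset statistics quoted above.

    X : (n_examples, n_features) feature matrix
    Y : (n_examples, n_labels) binary label matrix (1 = label is relevant)
    Returns |S|, dim(S), L(S), and LCard(S).
    """
    n_examples, n_features = X.shape          # |S| and dim(S)
    n_labels = Y.shape[1]                     # L(S)
    label_cardinality = Y.sum(axis=1).mean()  # LCard(S): average labels per example
    return n_examples, n_features, n_labels, label_cardinality

# Toy example: 4 examples, 3 features, 3 labels
X = np.random.rand(4, 3)
Y = np.array([[1, 0, 1],
              [0, 1, 0],
              [1, 1, 1],
              [0, 0, 1]])
print(dataset_statistics(X, Y))  # (4, 3, 3, 1.75)
```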
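The Dataset Splits row reports 10-fold cross-validation for evaluation, with all parameters selected by 5-fold cross-validation on each training set. The sketch below is a generic reconstruction of that nested protocol, not the authors' code: `build_model`, `param_grid`, and `score_fn` are hypothetical placeholders for the learner (e.g., CAMEL), its candidate parameter values, and the evaluation metric.

```python
import numpy as np
from itertools import product
from sklearn.model_selection import KFold

def nested_cv(X, Y, build_model, param_grid, score_fn, seed=0):
    """10-fold CV for evaluation, with parameters chosen by 5-fold CV
    on each training split (mean and std over the outer folds).

    build_model : callable(**params) -> estimator exposing fit(X, Y) / predict(X)
    param_grid  : dict mapping parameter name -> list of candidate values
    score_fn    : callable(Y_true, Y_pred) -> float, higher is better
    """
    outer = KFold(n_splits=10, shuffle=True, random_state=seed)
    inner = KFold(n_splits=5, shuffle=True, random_state=seed)
    outer_scores = []
    for train_idx, test_idx in outer.split(X):
        X_tr, Y_tr = X[train_idx], Y[train_idx]
        # Inner 5-fold CV on the training split to pick the parameters.
        best_params, best_score = None, -np.inf
        for values in product(*param_grid.values()):
            params = dict(zip(param_grid.keys(), values))
            fold_scores = []
            for fit_idx, val_idx in inner.split(X_tr):
                model = build_model(**params)
                model.fit(X_tr[fit_idx], Y_tr[fit_idx])
                fold_scores.append(score_fn(Y_tr[val_idx], model.predict(X_tr[val_idx])))
            if np.mean(fold_scores) > best_score:
                best_params, best_score = params, np.mean(fold_scores)
        # Refit on the full training split and evaluate on the held-out fold.
        model = build_model(**best_params)
        model.fit(X_tr, Y_tr)
        outer_scores.append(score_fn(Y[test_idx], model.predict(X[test_idx])))
    return np.mean(outer_scores), np.std(outer_scores)
```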
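The Experiment Setup row reports the Gaussian kernel K(x_i, x_j) = exp(-||x_i - x_j||_2^2 / (2σ^2)) with σ set to the average Euclidean distance over all pairs of training instances. Below is a minimal sketch of that kernel computation, assuming "all pairs" means distinct pairs (self-pairs excluded); the λ2 grid is left as quoted, since the paper elides its middle values.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def gaussian_kernel(X):
    """Gaussian kernel matrix with sigma set to the average Euclidean
    distance over all pairs of training instances, as quoted above."""
    pairwise = pdist(X, metric="euclidean")    # condensed distances over distinct pairs
    sigma = pairwise.mean()                    # average pairwise distance (self-pairs excluded; an assumption)
    D2 = squareform(pairwise) ** 2             # full matrix of squared distances
    return np.exp(-D2 / (2.0 * sigma ** 2))    # K(x_i, x_j) = exp(-||x_i - x_j||^2 / (2 sigma^2))

# Quoted parameter choices: lambda_1 fixed to 1; alpha searched over {0, 0.1, ..., 1}
# (assuming the quoted set means a step of 0.1). The lambda_2 grid is not expanded here.
lambda_1 = 1.0
alpha_grid = np.round(np.arange(0.0, 1.01, 0.1), 1)
```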