Multi-Label Co-Training

Authors: Yuying Xing, Guoxian Yu, Carlotta Domeniconi, Jun Wang, Zili Zhang

IJCAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | MLCT performs favorably against related competitive multi-label learning methods on benchmark datasets, and it is also robust to the input parameters. An extensive comparative study shows that MLCT performs favorably against the recently proposed COINs [Zhan and Zhang, 2017] and other representative multi-label learning methods (including ML-KNN [Zhang and Zhou, 2007], MLRGL [Bucak et al., 2011], MLLOC [Huang and Zhou, 2012], BSSML [Gönen, 2014] and SMILE [Tan et al., 2017]).
Researcher Affiliation | Academia | (1) College of Computer and Information Science, Southwest University, Chongqing 400715, China; (2) Department of Computer Science, George Mason University, Fairfax 22030, USA; (3) School of Information Technology, Deakin University, Geelong, VIC 3220, Australia. {yyxing4148, gxyu, kingjun, zhangzl}@swu.edu.cn, carlotta@cs.gmu.edu
Pseudocode | Yes | Algorithm 1: MLCT pseudo-code
Open Source Code | Yes | The code of MLCT is available at: http://mlda.swu.edu.cn/codes.php?name=MLCT.
Open Datasets | Yes | We assess the effectiveness of MLCT on four publicly accessible multi-label datasets from different domains, with different numbers of views and of samples [Gibaja et al., 2016; Guillaumin et al., 2010]. These datasets are described in Table 1.
Dataset Splits | No | To compute the performance of MLCT, we randomly partition the samples of each dataset into a training set (70%) and a testing set (30%). For the training set, we again randomly select 10% of the samples as the initial labeled data (L) and the remaining as unlabeled data (U) for co-training. (A split sketch follows the table below.)
Hardware Specification | No | The paper does not provide specific details about the hardware used for the experiments.
Software Dependencies | No | The paper mentions using ML-KNN as a specific multi-label classifier but does not provide details on software versions for this or other dependencies.
Experiment Setup | Yes | For co-training based methods, the maximum number of iterations (t) is fixed to 30, the number of samples (ua) in the buffer pool B is fixed to u/t, and the number of samples (ub) to be shared during the co-training process is fixed to 5% of ua. (A co-training loop sketch follows the table below.)
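
The split protocol reported above (70% training / 30% testing, with 10% of the training samples as the initial labeled set L and the rest as the unlabeled set U) is not accompanied by fixed seeds or split files. The following is a minimal sketch of one plausible way to reproduce it; the function name, seed, and rounding choices are assumptions, not taken from the released MLCT code.

```python
import numpy as np

def split_for_cotraining(n_samples, train_frac=0.70, labeled_frac=0.10, seed=0):
    """Randomly split sample indices into labeled (L), unlabeled (U) and test sets.

    Mirrors the protocol described in the paper: 70% of samples for training,
    30% for testing, and 10% of the training samples as the initial labeled
    data L (the remaining 90% of training form the unlabeled data U).
    """
    rng = np.random.default_rng(seed)
    perm = rng.permutation(n_samples)

    n_train = int(round(train_frac * n_samples))
    train_idx, test_idx = perm[:n_train], perm[n_train:]

    n_labeled = max(1, int(round(labeled_frac * n_train)))
    labeled_idx = train_idx[:n_labeled]       # initial labeled data L
    unlabeled_idx = train_idx[n_labeled:]     # unlabeled data U for co-training

    return labeled_idx, unlabeled_idx, test_idx

# Example with a hypothetical dataset of 2,000 samples:
L, U, test = split_for_cotraining(2000)
print(len(L), len(U), len(test))   # 140 1260 600
```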
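
For the co-training setup itself (t = 30 iterations, a buffer pool B of ua = u/t unlabeled samples, and ub = 5% of ua samples pseudo-labeled per view per iteration), the sketch below illustrates a generic two-view multi-label co-training loop with those parameters. It shows only the control flow, not the MLCT algorithm of the paper's Algorithm 1 (which additionally models label correlations and view-specific weights); the confidence score and the use of scikit-learn's KNeighborsClassifier as a stand-in for ML-KNN are assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def cotrain(X1, X2, Y, labeled, unlabeled, t=30, share_frac=0.05, k=5):
    """Generic two-view multi-label co-training loop (illustrative only).

    X1, X2    : per-view feature matrices, shapes (n_samples, d1) and (n_samples, d2)
    Y         : binary label matrix, shape (n_samples, n_labels); only rows listed
                in `labeled` are trusted, pseudo-labeled rows are overwritten
    labeled   : indices of the initial labeled set L
    unlabeled : indices of the unlabeled set U
    """
    labeled, unlabeled, Y = list(labeled), list(unlabeled), Y.copy()
    ua = max(1, len(unlabeled) // t)    # buffer-pool size, fixed to u / t
    ub = max(1, int(share_frac * ua))   # samples shared per view, fixed to 5% of ua

    for _ in range(t):
        if not unlabeled:
            break
        pool = unlabeled[:ua]           # draw the buffer pool B from U

        for X in (X1, X2):              # each view trains on its own features
            if not pool:
                break
            clf = KNeighborsClassifier(n_neighbors=min(k, len(labeled)))
            clf.fit(X[labeled], Y[labeled])

            # Per-label positive-class probabilities for the pool samples.
            proba = clf.predict_proba(X[pool])
            pos = np.column_stack([
                p[:, list(c).index(1)] if 1 in c else np.zeros(len(pool))
                for p, c in zip(proba, clf.classes_)
            ])

            # Crude confidence: mean distance of the probabilities from 0.5.
            conf = np.abs(pos - 0.5).mean(axis=1)
            chosen = np.argsort(-conf)[:ub]

            # Pseudo-label the most confident pool samples and promote them to L.
            global_idx = np.asarray(pool)[chosen]
            Y[global_idx] = (pos[chosen] >= 0.5).astype(Y.dtype)
            labeled.extend(global_idx.tolist())
            pool = [s for j, s in enumerate(pool) if j not in set(chosen.tolist())]

        labeled_set = set(labeled)
        unlabeled = [s for s in unlabeled if s not in labeled_set]

    return labeled, Y
```

With the indices from the split sketch above, a call would look like cotrain(X_view1, X_view2, Y, L, U), where X_view1 and X_view2 are hypothetical names for the two per-view feature matrices.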