Label-Sensitive Task Grouping by Bayesian Nonparametric Approach for Multi-Task Multi-Label Learning
Authors: Xiao Zhang, Wenzhong Li, Vu Nguyen, Fuzhen Zhuang, Hui Xiong, Sanglu Lu
IJCAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate the model performance on three public multi-task multi-label data sets, and the results show that LABTAG outperforms the compared baselines by a significant margin. (Section 4, Experimental Evaluation) |
| Researcher Affiliation | Academia | (1) State Key Laboratory for Novel Software Technology, Nanjing University, China; (2) Center for Pattern Recognition and Data Analytics, Deakin University, Australia; (3) Key Lab of IIP of CAS, Institute of Computing Technology, CAS, Beijing, China; (4) Management Science & Information Systems, Rutgers University, USA |
| Pseudocode | Yes | Algorithm 1 The generative process of the proposed LABTAG model |
| Open Source Code | No | The paper mentions using and downloading code for the baseline models ('MEKA toolbox', 'Matlab code from the author's website for BNMC and ML-KNN'), but does not provide access to the source code of the proposed LABTAG model. |
| Open Datasets | Yes | We use three public data sets for the performance evaluation, including the TripAdvisor, LDOS-CoMoDa and Enron Corpus data sets... TripAdvisor and LDOS-CoMoDa are two context-aware data sets, which are used for context recommendation systems [Zheng et al., 2014]... Finally, the Enron Email Corpus contains email information (email content and recipients) from Enron [Klimt and Yang, 2004; Carvalho and Cohen, 2007]. |
| Dataset Splits | Yes | In each task, 50% of the data are used for training and the remaining 50% for testing (a per-task split sketch follows the table). |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions software like 'MEKA toolbox' and 'Matlab code' used for baselines, but does not provide specific version numbers for these or any other software dependencies. |
| Experiment Setup | Yes | The hyper-parameter settings of the LABTAG model are as follows: α = 1, δ = 0.01, ϖ = 0.07, β = 1, µ₀ = 0, Σ₀ = 10I. The truncation threshold is set to 0.001 × #Train and the learning rate is set to 0.01 (see the config sketch after the table). |
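
The Dataset Splits row reports a 50/50 per-task split but does not say how instances are assigned. Below is a minimal sketch in Python, assuming each task is an (X, Y) pair of feature and multi-label matrices and that the split is random (the paper does not specify); `split_tasks` and `seed` are illustrative names, not from the paper:

```python
import numpy as np

def split_tasks(tasks, seed=0):
    """Per-task 50/50 train/test split.

    `tasks` is assumed to be a list of (X, Y) pairs, one per task,
    where X is a feature matrix and Y a multi-label matrix with
    matching row counts.
    """
    rng = np.random.default_rng(seed)
    splits = []
    for X, Y in tasks:
        n = X.shape[0]
        perm = rng.permutation(n)   # random assignment is an assumption
        half = n // 2
        train, test = perm[:half], perm[half:]
        splits.append(((X[train], Y[train]), (X[test], Y[test])))
    return splits
```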
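The Experiment Setup row lists the paper's reported hyper-parameters. The sketch below merely collects them into one place, assuming the Greek symbols map to the identifiers shown and that µ₀ and Σ₀ are sized to the feature dimension; `labtag_config` is a hypothetical helper, not part of the paper's implementation:

```python
import numpy as np

def labtag_config(n_train, n_features):
    """Hyper-parameters as reported in Section 4 of the paper."""
    return {
        "alpha": 1.0,                              # α = 1
        "delta": 0.01,                             # δ = 0.01
        "varpi": 0.07,                             # ϖ = 0.07
        "beta": 1.0,                               # β = 1
        "mu0": np.zeros(n_features),               # µ₀ = 0 (dimension assumed)
        "Sigma0": 10.0 * np.eye(n_features),       # Σ₀ = 10I (dimension assumed)
        "truncation_threshold": 0.001 * n_train,   # 0.001 × #Train
        "learning_rate": 0.01,
    }
```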