Dual Set Multi-Label Learning

Authors: Chong Liu, Peng Zhao, Sheng-Jun Huang, Yuan Jiang, Zhi-Hua Zhou

AAAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "To empirically evaluate the performance of our approach, we conduct experiments on two manually collected real-world datasets along with an adapted dataset. Experimental results validate the effectiveness of our approach for dual set multi-label learning."
Researcher Affiliation | Academia | "Chong Liu (1,2), Peng Zhao (1,2), Sheng-Jun Huang (3), Yuan Jiang (1,2), Zhi-Hua Zhou (1,2). (1) National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China; (2) Collaborative Innovation Center of Novel Software Technology and Industrialization, Nanjing 210023, China; (3) College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China. {liuc, zhaop, jiangy, zhouzh}@lamda.nju.edu.cn, huangsj@nuaa.edu.cn"
Pseudocode | Yes | "Algorithm 1 The DSML algorithm" (a schematic boosting skeleton is sketched after this table)
Open Source Code | No | The paper does not provide an explicit statement about the availability of source code or a link to a code repository for the described methodology.
Open Datasets | No | "We manually collect two real-world datasets and adapt one publicly available dataset for dual set multi-label learning. Details of them can be found in a longer version." The paper names the datasets in Table 1 but does not provide any access information (link, DOI, or a formal citation for the original source of the adapted dataset).
Dataset Splits | Yes | "In this part, all algorithms are evaluated on the same five-fold partition of the same datasets." (a sketch of such a shared partition appears after this table)
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU/CPU models or memory specifications.
Software Dependencies | No | The paper mentions several algorithms and base classifiers (e.g., RBF neural networks, ML-KNN, ML-RBF, BP-MLL, Rank-SVM, AdaBoost) but does not give version numbers for any software dependencies or libraries used in the implementation.
Experiment Setup | Yes | "For DSML, the number of boosting rounds T is set to 10, and B is set to 1.05. Moreover, since dual set multi-label learning is a specific case of general multi-label learning, traditional multi-label algorithms can be used for this case. Four of these algorithms are compared, which are ML-KNN (Zhang and Zhou 2007), ML-RBF (Zhang 2009), BP-MLL (Zhang and Zhou 2006), and Rank SVM (Elisseeff and Weston 2002). For these methods, hyper-parameters are set according to the suggestions given by their papers."
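
On the Pseudocode and Experiment Setup rows: the paper's Algorithm 1 is a boosting procedure, and the only hyper-parameters the quoted passage reports are the number of rounds T = 10 and a factor B = 1.05. The sketch below is a generic AdaBoost-style reweighting loop that merely shows where two such parameters could enter; it is not the authors' Algorithm 1 (in particular, it ignores DSML's dual-set label structure), and `fit_base_learner` is a hypothetical stand-in for the paper's RBF-network base learner.

```python
import numpy as np

def boosting_skeleton(X, Y, fit_base_learner, T=10, B=1.05):
    """Generic boosting loop parameterized by the two values the paper
    reports (T = 10 rounds, B = 1.05). Schematic sketch only; the role
    of B here (inflating weights of hard examples) is our assumption."""
    n = X.shape[0]
    weights = np.full(n, 1.0 / n)        # start from uniform example weights
    ensemble = []
    for _ in range(T):
        # train a multi-label base model on the current weighting
        model = fit_base_learner(X, Y, sample_weight=weights)
        # mark examples with at least one label predicted wrong
        wrong = (model.predict(X) != Y).any(axis=1)
        weights[wrong] *= B              # assumed: B up-weights hard examples
        weights /= weights.sum()         # renormalize to a distribution
        ensemble.append(model)
    return ensemble
```

With B slightly above 1, weights drift only gently toward hard examples over the 10 rounds, which is consistent with the small reported value, though the paper's actual update rule may differ.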
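On the Dataset Splits row: "the same five-fold partition" means one fixed split is generated once and every compared method is trained and tested on identical index sets. A minimal sketch of that protocol, assuming scikit-learn's KFold; the seed and dataset size are placeholders of ours, not values reported in the paper.

```python
import numpy as np
from sklearn.model_selection import KFold

def make_shared_folds(n_samples, seed=0):
    """Build one five-fold partition to be reused by all compared
    algorithms, so results differ only in the learner, not the split."""
    kf = KFold(n_splits=5, shuffle=True, random_state=seed)
    return list(kf.split(np.arange(n_samples)))

folds = make_shared_folds(n_samples=1000)  # 1000 is a placeholder size
for fold_id, (train_idx, test_idx) in enumerate(folds):
    # each algorithm would be fit on train_idx and scored on test_idx
    print(fold_id, len(train_idx), len(test_idx))
```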