Exploiting Unlabeled Data via Partial Label Assignment for Multi-Class Semi-Supervised Learning

Authors: Zhen-Ru Zhang, Qian-Wen Zhang, Yunbo Cao, Min-Ling Zhang (pp. 10973-10980)

AAAI 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Comparative studies against state-of-the-art approaches clearly show the effectiveness of the proposed unlabeled data exploitation strategy for multi-class semi-supervised learning.
Researcher Affiliation | Collaboration | Zhen-Ru Zhang (1,2,3), Qian-Wen Zhang (3), Yunbo Cao (3), Min-Ling Zhang (1,2,4)*; (1) School of Computer Science and Engineering, Southeast University, Nanjing 210096, China; (2) Key Laboratory of Computer Network and Information Integration (Southeast University), Ministry of Education, China; (3) Tencent Cloud Xiaowei, Beijing, China; (4) Collaborative Innovation Center of Wireless Communications Technology, China; zhangzr@seu.edu.cn, {cowenzhang, yunbocao}@tencent.com, zhangml@seu.edu.cn
Pseudocode | Yes | Table 1: The pseudo-code of EUPAL.
Open Source Code | No | The paper does not provide an explicit statement about, or link to, open-source code for its methodology.
Open Datasets | Yes | In this paper, a total of 13 benchmark multi-class data sets (Dua and Graff 2017) have been employed for experimental studies whose characteristics are summarized in Table 2.
Dataset Splits | No | The paper defines training and test sets but does not explicitly mention a validation set or its split for hyperparameter tuning.
Hardware Specification | No | The paper does not provide specific details about the hardware used to run experiments, such as GPU/CPU models or memory.
Software Dependencies | No | The paper mentions software such as LIBSVM and IPAL but does not provide version numbers for them or for any other software dependencies.
Experiment Setup | Yes | As shown in Table 1, the values of k (number of nearest neighbors), α (balancing parameter), and T (maximum number of iterations) for EUPAL are set to be 5, 0.4 and 50 respectively. Furthermore, the supervised training algorithm L and the partial label training algorithm P are instantiated with LIBSVM and IPAL accordingly.
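The reported settings (k = 5, α = 0.4, T = 50; LIBSVM for L, IPAL for P) amount to a small configuration. The sketch below is a minimal, hypothetical reconstruction of the supervised branch L only, assuming scikit-learn's SVC (a LIBSVM wrapper) as a stand-in; IPAL has no standard open-source release, so the partial label branch P is not reconstructed. The constants and function names are illustrative and are not the authors' code.

```python
# Hypothetical sketch of the reported EUPAL hyperparameters and the
# supervised branch L, using scikit-learn's SVC (a LIBSVM wrapper) as a
# stand-in. The partial label learner P (IPAL) is not reconstructed here.
import numpy as np
from sklearn.svm import SVC

K_NEIGHBORS = 5   # k: number of nearest neighbors (as reported)
ALPHA = 0.4       # alpha: balancing parameter (as reported)
MAX_ITERS_T = 50  # T: maximum number of iterations (as reported)

def train_supervised_L(X_labeled, y_labeled):
    """Supervised learner L instantiated with LIBSVM (via sklearn's SVC)."""
    return SVC().fit(X_labeled, y_labeled)

# Toy multi-class data, only to show the supervised step runs end to end.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 4))
y = rng.integers(0, 3, size=30)
model = train_supervised_L(X, y)
print(model.predict(X[:5]))
```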