Exploring Binary Classification Hidden within Partial Label Learning

Authors: Hengheng Luo, Yabin Zhang, Suyun Zhao, Hong Chen, Cuiping Li

IJCAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In this section, we verify the effectiveness of the proposed algorithm with extensive experiments on synthetic datasets and real-world datasets respectively. The best results among all methods are highlighted in bold and we use [a marker] to represent that the proposed method is significantly better than the other baselines by using paired t-test at 5% significance level."
Researcher Affiliation | Academia | Hengheng Luo¹, Yabin Zhang², Suyun Zhao¹, Hong Chen¹ and Cuiping Li¹. ¹Key Lab of Data Engineering and Knowledge Engineering of MOE, Renmin University of China; ²Gaoling School of Artificial Intelligence, Renmin University of China, Beijing 100872, China.
Pseudocode | Yes | Algorithm 1: PLL via Partial Binary Classification Loss.
Open Source Code | No | The paper does not provide any explicit statement about releasing source code, nor a link to a code repository.
Open Datasets | Yes | "We present experimental results on three widely-used benchmark datasets, i.e., MNIST [LeCun et al., 1998], Fashion [Xiao et al., 2017] and Kuzushiji [Clanuwat et al., 2018]... Furthermore, five real-world datasets are used from various application domains, including Lost [Cour et al., 2009], MSRCv2 [Liu and Dietterich, 2012], Bird Song [Briggs et al., 2012], Soccer Player [Zeng et al., 2013] and Yahoo! News [Guillaumin et al., 2010]." (A dataset-loading sketch follows the table.)
Dataset Splits | Yes | "For our method, the best hyperparameters are selected through grid search on a validation set... Means and standard deviations of each baseline are measured over 10-fold cross-validation, as shown in Table 2." (A cross-validation sketch follows the table.)
Hardware Specification | Yes | "The implementations are based on PyTorch [Paszke et al., 2019] and experiments are conducted with NVIDIA RTX 2080 Ti GPUs."
Software Dependencies | No | "The implementations are based on PyTorch [Paszke et al., 2019]." PyTorch is named as the implementation framework, but no version number is given.
Experiment Setup | Yes | "For our method, the best hyperparameters are selected through grid search on a validation set, where the learning rate lr ∈ {0.001, 0.005, 0.01, 0.05, 0.1}, the weight decay wd ∈ {10⁻⁵, ..., 10⁻³}, and the temperature coefficient λ ∈ {0.5, ..., 1.0}, with the learning rate halved every 50 epochs. For other methods, all hyperparameters are searched according to the suggested parameter settings. We employ two base models, a linear model and a 5-layer perceptron (MLP), and use SGD as the optimizer with a momentum of 0.9. The number of epochs is set to 250, and the mini-batch size is set to 256 for synthetic datasets and 128 for real-world datasets respectively." (A training-loop sketch follows the table.)
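
The three benchmark datasets quoted under Open Datasets are all distributed with torchvision, which simplifies reproduction of the synthetic experiments. A minimal loading sketch, assuming torchvision and a local ./data root (neither is stated in the paper); generating the synthetic candidate label sets on top of the true labels would be a separate step:

```python
# Minimal sketch: loading MNIST, Fashion and Kuzushiji via torchvision.
# The ./data root and the plain ToTensor transform are assumptions;
# the paper does not describe its data pipeline.
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()

mnist = datasets.MNIST("./data", train=True, download=True, transform=to_tensor)
fashion = datasets.FashionMNIST("./data", train=True, download=True, transform=to_tensor)
kuzushiji = datasets.KMNIST("./data", train=True, download=True, transform=to_tensor)
```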
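
The 10-fold protocol quoted under Dataset Splits maps onto any standard splitter. A minimal sketch, assuming scikit-learn, a placeholder feature matrix, and a fixed seed (the paper names none of these):

```python
# Minimal sketch: 10-fold cross-validation on a real-world dataset.
# scikit-learn, the placeholder X, and the fixed seed are assumptions,
# not the authors' protocol.
import numpy as np
from sklearn.model_selection import KFold

X = np.random.randn(1000, 64)  # hypothetical feature matrix

accuracies = []
for train_idx, test_idx in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
    X_train, X_test = X[train_idx], X[test_idx]
    # ... fit on the nine training folds, evaluate on the held-out fold,
    # then accuracies.append(fold_accuracy) ...

# The mean and standard deviation of `accuracies` give the per-method
# numbers of the kind reported in Table 2.
```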
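
Taken together, the Experiment Setup row pins down most of a standard PyTorch training configuration. A sketch under those stated values, where the network and the omitted inner loop are placeholders (the paper's partial binary classification loss is not reproduced here), and "learning rate halved every 50 epochs" is read as a StepLR schedule with gamma = 0.5:

```python
# Sketch of the quoted configuration: SGD with momentum 0.9, 250 epochs,
# learning rate halved every 50 epochs. lr and weight decay are picked
# from the searched grids; the network is a stand-in, not the paper's MLP.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(784, 500), nn.ReLU(),
    nn.Linear(500, 500), nn.ReLU(),
    nn.Linear(500, 10),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            momentum=0.9, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.5)

for epoch in range(250):
    # ... iterate over mini-batches (256 for synthetic, 128 for real-world
    # datasets), compute the loss, and step the optimizer ...
    scheduler.step()
```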