Mutual Partial Label Learning with Competitive Label Noise

Authors: Yan Yan, Yuhong Guo

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments are conducted on several benchmark PLL datasets, and the proposed ML-PLL approach demonstrates state-of-the-art performance for partial label learning.
Researcher Affiliation | Academia | Yan Yan (Carleton University, Ottawa, Canada) and Yuhong Guo (Carleton University; CIFAR AI Chair, Amii, Canada); yanyan@cunet.carleton.ca, yuhong.guo@carleton.ca
Pseudocode | Yes | We present the mini-batch based training algorithm for ML-PLL in Algorithm 1 (Algorithm 1: Training Algorithm for ML-PLL).
Open Source Code | No | The paper does not contain any explicit statements or links indicating that the source code for the methodology is openly available.
Open Datasets | Yes | We conducted experiments on four widely used benchmark image datasets: Fashion-MNIST (Xiao et al., 2017), Kuzushiji-MNIST (Clanuwat et al., 2018), CIFAR-10 and CIFAR-100 (Krizhevsky et al., 2009).
Dataset Splits | Yes | The parameters α and β are chosen from {0.5, 0.6, 0.7, 0.8, 0.9, 1} and {1, 2, 3, 4, 5, 6}, respectively, according to the accuracy on a validation dataset (10% of the training dataset).
Hardware Specification | No | The paper does not specify the hardware used for experiments, such as CPU or GPU models, or cloud computing resources.
Software Dependencies | No | The paper mentions using a "standard SGD optimizer" but does not provide specific version numbers for any software dependencies, libraries, or programming languages.
Experiment Setup | Yes | The weighted combination parameter λ in Eq.(1), the temperature parameter τ in Eq.(2), and the momentum coefficients in Eq.(5) and Eq.(9) are set to 0.99, 1, 0.999, and 0.99, respectively. In all the experiments, we utilize a standard SGD optimizer with a momentum of 0.9 and a weight decay of 1e-3 for model training. The mini-batch size, learning rate and total training epochs are set to 128, 0.01 and 400, respectively. The parameters α and β are chosen from {0.5, 0.6, 0.7, 0.8, 0.9, 1} and {1, 2, 3, 4, 5, 6}, respectively, according to the accuracy on a validation dataset (10% of the training dataset).
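
To make the quoted dataset and split settings concrete, here is a minimal sketch (not the authors' code, since none is released) of loading one of the listed benchmarks, holding out 10% of the training set for validation, and defining the reported search grids for α and β. The choice of CIFAR-10, the transform, and the random seed are illustrative assumptions.

```python
import torch
from torch.utils.data import random_split
from torchvision import datasets, transforms

transform = transforms.ToTensor()  # placeholder; the paper does not describe its augmentations

# CIFAR-10 is one of the four benchmarks listed above (any of the four could be used here).
train_full = datasets.CIFAR10(root="./data", train=True, download=True, transform=transform)

# Hold out 10% of the training data as a validation set, as stated in the paper.
val_size = int(0.1 * len(train_full))
train_set, val_set = random_split(
    train_full,
    [len(train_full) - val_size, val_size],
    generator=torch.Generator().manual_seed(0),  # seed is an illustrative assumption
)

# Search grids for alpha and beta reported in the paper; the best pair is
# selected by accuracy on the validation set.
alpha_grid = [0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
beta_grid = [1, 2, 3, 4, 5, 6]
```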
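
Likewise, the quoted experiment setup amounts to a small optimizer and training configuration. The sketch below assumes a PyTorch workflow and continues from the split above; the backbone and the cross-entropy loss are placeholders standing in for the paper's network and the ML-PLL objective of Eq.(1). Only the hyperparameter values (SGD with momentum 0.9, weight decay 1e-3, learning rate 0.01, batch size 128, 400 epochs) come from the paper.

```python
from torch import nn, optim
from torch.utils.data import DataLoader

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # placeholder backbone, not the paper's network

# Optimizer settings quoted in the Experiment Setup row above.
optimizer = optim.SGD(
    model.parameters(),
    lr=0.01,            # learning rate
    momentum=0.9,       # SGD momentum
    weight_decay=1e-3,  # weight decay
)

# train_set comes from the split sketch above; mini-batch size 128 as reported.
train_loader = DataLoader(train_set, batch_size=128, shuffle=True)

for epoch in range(400):  # total training epochs reported in the paper
    for images, targets in train_loader:
        optimizer.zero_grad()
        loss = nn.functional.cross_entropy(model(images), targets)  # stand-in loss, not the ML-PLL objective
        loss.backward()
        optimizer.step()
```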