Progressive Purification for Instance-Dependent Partial Label Learning

Authors: Ning Xu, Biao Liu, Jiaqi Lv, Congyu Qiao, Xin Geng

ICML 2023

Reproducibility Variable Result LLM Response
Research Type Experimental Experiments on the benchmark datasets and the real-world datasets validate the effectiveness of the proposed method.
Researcher Affiliation Academia 1School of Computer Science and Engineering, Southeast University, Nanjing, China. E-mail: {xning, liubiao01, qiaocy, xgeng}@seu.edu.cn. 2RIKEN Center for Advanced Intelligence Project, Tokyo 103-0027, Japan. E-mail: is.jiaqi.lv@gmail.com.
Pseudocode Yes Algorithm 1 POP Algorithm
Open Source Code Yes Source code is available at https://github.com/palm-ml/POP.
Open Datasets Yes We adopt five widely used benchmark datasets including MNIST (LeCun et al., 1998), Kuzushiji-MNIST (Clanuwat et al., 2018), Fashion-MNIST (Xiao et al., 2017), CIFAR-10 (Krizhevsky & Hinton, 2009), CIFAR-100 (Krizhevsky & Hinton, 2009). In addition, seven real-world PLL datasets which are collected from different application domains are used, including Lost (Cour et al., 2011), Soccer Player (Zeng et al., 2013), Yahoo!News (Guillaumin et al., 2010) from automatic face naming, MSRCv2 (Liu & Dietterich, 2012) from object classification, Malagasy (Garrette & Baldridge, 2013) from POS tagging, Mirflickr (Huiskes & Lew, 2008) from web image classification, and Bird Song (Briggs et al., 2012) from bird song classification.
Dataset Splits Yes The hyper-parameters of the deep models are selected so as to maximize the accuracy on a validation set (10% of the training set).
Hardware Specification No The paper mentions models used (e.g., '5-layer LeNet', 'WideResNet-28-2') and software ('PyTorch') but does not specify any hardware details such as GPU models, CPU types, or memory used for experiments.
Software Dependencies No The paper mentions 'PyTorch' as the implementation framework but does not provide specific version numbers for PyTorch or any other software dependencies.
Experiment Setup Yes We set e_0 = 0.9, e_end = 0.1 and e_s = 0.01. We run 5 trials on the benchmark datasets and the real-world PLL datasets.
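The reported hyper-parameters suggest a purification margin that is annealed from an initial value e_0 down to a floor e_end in steps of e_s as training progresses. The sketch below is only an illustration of such a schedule under that assumption; the function name `margin_schedule` and the linear per-round decay are hypothetical, not taken from the POP paper.

```python
def margin_schedule(e0: float = 0.9, e_end: float = 0.1, e_step: float = 0.01):
    """Return the assumed sequence of purification margins.

    Starts at e0 and decreases by e_step each round until reaching e_end.
    Values come from the paper's reported setup (e0=0.9, e_end=0.1, e_s=0.01);
    the linear-decay interpretation is an assumption for illustration.
    """
    # Number of decay steps needed to go from e0 down to e_end.
    n_steps = round((e0 - e_end) / e_step)
    return [e0 - i * e_step for i in range(n_steps + 1)]


schedule = margin_schedule()
print(len(schedule))   # number of rounds in the schedule
print(schedule[0])     # starts at e0 = 0.9
```

With the paper's values this yields 81 margin values, from 0.9 down to 0.1 in steps of 0.01.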