Progressive Identification of True Labels for Partial-Label Learning
Authors: Jiaqi Lv, Miao Xu, Lei Feng, Gang Niu, Xin Geng, Masashi Sugiyama
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Thorough experiments demonstrate it sets the new state of the art. In this section, we experimentally analyze the proposed method PRODEN, and compare with state-of-the-art PLL methods. |
| Researcher Affiliation | Academia | 1) School of Computer Science and Engineering, Southeast University, Nanjing, China; 2) RIKEN Center for Advanced Intelligence Project, Tokyo, Japan; 3) The University of Queensland, Australia; 4) School of Computer Science and Engineering, Nanyang Technological University, Singapore; 5) University of Tokyo, Tokyo, Japan. |
| Pseudocode | Yes | Algorithm 1 PRODEN Algorithm. Input: $D = \{(x_i, s_i)\}_{i=1}^n$, the partial-label training set; $T$: number of epochs. Output: $\Theta$: model parameter for $g(x; \Theta)$. (A hedged PyTorch sketch of the algorithm's two core steps follows the table.) |
| Open Source Code | Yes | The implementation is based on PyTorch (Paszke et al., 2019) and experiments were carried out with NVIDIA Tesla V100 GPU; it is available at https://github.com/Lvcrezia77/PRODEN. |
| Open Datasets | Yes | We use four widely used benchmark datasets including MNIST (LeCun et al., 1998), Fashion-MNIST (Xiao et al., 2017), Kuzushiji-MNIST (Clanuwat et al., 2018), and CIFAR-10 (Krizhevsky & Hinton, 2009), and five datasets from the UCI Machine Learning Repository. |
| Dataset Splits | No | The paper mentions a 'test set' and 'training examples' but does not explicitly specify a validation split (e.g., '70% training, 15% validation, 15% test'). While it mentions 'five-fold cross-validation' for the UCI datasets, that describes an evaluation protocol rather than a distinct train/validation/test split for a single model run. |
| Hardware Specification | Yes | The implementation is based on PyTorch (Paszke et al., 2019) and experiments were carried out with NVIDIA Tesla V100 GPU |
| Software Dependencies | No | The paper mentions 'PyTorch (Paszke et al., 2019)' but does not explicitly provide a version number for PyTorch or any other software dependency. |
| Experiment Setup | Yes | The optimizer is stochastic gradient descent (SGD) (Robbins & Monro, 1951) with momentum 0.9. We train each model for 500 epochs with the softmax function and cross-entropy loss. PRODEN-itera updates label weights every 100 epochs. (See the training-loop sketch after the table.) |
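
The Pseudocode row summarizes Algorithm 1 of the paper. As a reading aid, here is a minimal PyTorch sketch of its two core steps, the candidate-weighted cross-entropy loss and the progressive label-weight update; the function names `proden_loss` and `update_weights` and the 0/1 `candidate_mask` encoding of each candidate set $s_i$ are our assumptions, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def proden_loss(logits, weights):
    # Weighted cross-entropy: each candidate label contributes
    # in proportion to its current weight w_ij (zero outside s_i).
    log_probs = F.log_softmax(logits, dim=1)
    return -(weights * log_probs).sum(dim=1).mean()

@torch.no_grad()
def update_weights(logits, candidate_mask):
    # Progressive identification: re-normalize the softmax outputs
    # over each example's candidate set, so more plausible
    # candidates accumulate larger weights as training proceeds.
    probs = F.softmax(logits, dim=1) * candidate_mask  # zero non-candidates
    return probs / probs.sum(dim=1, keepdim=True)
```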
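The Experiment Setup row then maps onto an ordinary training loop. The sketch below is hedged: the synthetic data, batch size, learning rate, and two-layer network are placeholders chosen to keep the example self-contained, while the SGD momentum of 0.9, the 500 epochs, and the uniform weight initialization over candidate sets follow the paper's description.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Synthetic stand-in data; real runs would load MNIST-style inputs
# whose candidate sets are guaranteed to contain the true label.
n, num_classes = 1000, 10
train_x = torch.randn(n, 28 * 28)
candidate_masks = (torch.rand(n, num_classes) < 0.3).float()
candidate_masks[torch.arange(n), torch.randint(num_classes, (n,))] = 1.0

model = torch.nn.Sequential(torch.nn.Linear(28 * 28, 300),
                            torch.nn.ReLU(),
                            torch.nn.Linear(300, num_classes))
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)

# Label weights start uniform over each candidate set.
weights = candidate_masks / candidate_masks.sum(dim=1, keepdim=True)

loader = DataLoader(TensorDataset(train_x, candidate_masks, torch.arange(n)),
                    batch_size=256, shuffle=True)

for epoch in range(500):                       # "500 epochs" per the paper
    for x, mask, idx in loader:
        logits = model(x)
        loss = proden_loss(logits, weights[idx])
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # PRODEN refreshes the weights every mini-batch; the
        # PRODEN-itera variant instead updates them every 100 epochs.
        weights[idx] = update_weights(logits, mask)
```

Design note: updating the weights from the current mini-batch logits keeps the identification step essentially free, which is the practical appeal of PRODEN over methods that solve a separate label-estimation problem in each round.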