Provably Consistent Partial-Label Learning
Authors: Lei Feng, Jiaqi Lv, Bo Han, Miao Xu, Gang Niu, Xin Geng, Bo An, Masashi Sugiyama
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on benchmark and real-world datasets validate the effectiveness of the proposed generation model and two PLL methods. |
| Researcher Affiliation | Academia | 1. School of Computer Science and Engineering, Nanyang Technological University, Singapore; 2. School of Computer Science and Engineering, Southeast University, Nanjing, China; 3. Department of Computer Science, Hong Kong Baptist University, China; 4. The University of Queensland, Australia; 5. Center for Advanced Intelligence Project, RIKEN, Japan; 6. The University of Tokyo, Japan |
| Pseudocode | Yes | Algorithm 1 RC Algorithm; Algorithm 2 CC Algorithm |
| Open Source Code | No | The paper does not provide explicit links or statements about the availability of open-source code for the described methodology. |
| Open Datasets | Yes | We collect four widely used benchmark datasets including MNIST [38], Kuzushiji-MNIST [12], Fashion-MNIST [63], and CIFAR-10 [37], and five datasets from the UCI Machine Learning Repository [37]. In addition, we also use five widely used real-world partially labeled datasets, including Lost [13], Bird Song [6], MSRCv2 [41], Soccer Player [69], and Yahoo! News [24]. |
| Dataset Splits | Yes | Hyper-parameters are selected so as to maximize the accuracy on a validation set (10% of the training set) of partially labeled data. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments (e.g., GPU/CPU models or memory). |
| Software Dependencies | No | The paper mentions "PyTorch [56]" and "Adam [36]" but does not specify version numbers for either, nor does it list any other key software components with versions. |
| Experiment Setup | Yes | Hyper-parameters are selected so as to maximize the accuracy on a validation set (10% of the training set) of partially labeled data. We implement them using PyTorch [56] and use the Adam [36] optimizer with the mini-batch size set to 256 and the number of epochs set to 250. |
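The experiment-setup details quoted above (a 10% validation hold-out from the partially labeled training set, Adam, mini-batch size 256, 250 epochs) can be sketched as follows. This is a minimal, framework-agnostic illustration: the split helper and the 60,000-example training-set size used in the demo are assumptions for illustration, not values stated by the paper.

```python
import random

# Hyperparameters quoted from the paper's experiment setup.
# The model architecture and learning rate are not specified here.
HYPERPARAMS = {
    "optimizer": "Adam",
    "batch_size": 256,
    "epochs": 250,
    "validation_fraction": 0.10,  # 10% of the training set held out
}

def split_train_validation(indices, validation_fraction=0.10, seed=0):
    """Hold out a fraction of the (partially labeled) training set for
    hyperparameter selection, as described in the paper's setup."""
    rng = random.Random(seed)
    shuffled = list(indices)
    rng.shuffle(shuffled)
    n_val = int(len(shuffled) * validation_fraction)
    return shuffled[n_val:], shuffled[:n_val]

# Hypothetical example: a training set of 60,000 examples.
train_idx, val_idx = split_train_validation(range(60000),
                                            HYPERPARAMS["validation_fraction"])
print(len(train_idx), len(val_idx))  # 54000 6000
```

In a PyTorch implementation, `train_idx` and `val_idx` would typically feed `torch.utils.data.Subset` objects wrapped in `DataLoader`s with `batch_size=256`, and the model would be trained with `torch.optim.Adam` for 250 epochs, keeping the checkpoint with the best validation accuracy.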