Learning with Partial Labels from Semi-supervised Perspective
Authors: Ximing Li, Yuanzhi Jiang, Changchun Li, Yiyuan Wang, Jihong Ouyang
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical results demonstrate that PLSP significantly outperforms the existing PL baseline methods, especially on high ambiguity levels. Code available: https://github.com/changchunli/PLSP. ... We conduct extensive experiments to evaluate the effectiveness of PLSP on benchmark datasets. |
| Researcher Affiliation | Academia | ¹College of Computer Science and Technology, Jilin University, China; ²Key Laboratory of Symbolic Computation and Knowledge Engineering of MOE, Jilin University, China; ³College of Information Science and Technology, Northeast Normal University, China; ⁴Key Laboratory of Applied Statistics of MOE, Northeast Normal University, China |
| Pseudocode | Yes | Algorithm 1: Training procedure of PLSP |
| Open Source Code | Yes | Code available: https://github.com/changchunli/PLSP. |
| Open Datasets | Yes | We utilize 3 widely used benchmark image datasets, including Fashion-MNIST (Xiao, Rasul, and Vollgraf 2017), CIFAR-10 and CIFAR-100 (Krizhevsky 2016). |
| Dataset Splits | No | The paper mentions using Fashion-MNIST, CIFAR-10, and CIFAR-100 datasets but does not explicitly provide the specific percentages or counts for training, validation, and test splits used in their experiments. It only describes how partially labeled versions were synthesized. |
| Hardware Specification | Yes | All experiments are carried out on a Linux server with one NVIDIA GeForce RTX 3090 GPU. |
| Software Dependencies | No | The paper mentions using 'Scikit-Learn tools' but does not provide any specific version numbers for software dependencies. |
| Experiment Setup | Yes | For all baselines and the pre-training stage of PLSP, we set the batch size to 256 for Fashion-MNIST and CIFAR-10, and to 64 for CIFAR-100. For PLSP, we use the following hyper-parameter settings: γ₀ = 1.0, λ₀ = 0.01, τ₀ = 0.75, number of pre-training epochs T₀ = 10, number of SS training epochs T = 250, number of inner loops I = 200, batch sizes of pseudo-labeled and pseudo-unlabeled instances B_l = 64, B_u = 256. Specially, for CIFAR-100 we set T₀ = 50, I = 800, B_l = 16, B_u = 64. |
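
For reference, the reported hyper-parameters can be gathered into a small configuration sketch. This is a minimal illustration assembled only from the values quoted in the table above, not code from the authors' repository at https://github.com/changchunli/PLSP; the dictionary keys and the `get_config` helper are hypothetical names chosen for illustration.

```python
# Hypothetical configuration sketch collecting the hyper-parameters reported
# for PLSP. Key names and the helper function are illustrative assumptions,
# not identifiers from the authors' code.

BASE_CONFIG = {
    "gamma_0": 1.0,                 # γ₀
    "lambda_0": 0.01,               # λ₀
    "tau_0": 0.75,                  # τ₀
    "pretrain_epochs": 10,          # T₀, pre-training epochs
    "ss_epochs": 250,               # T, SS training epochs
    "inner_loops": 200,             # I
    "batch_pseudo_labeled": 64,     # B_l
    "batch_pseudo_unlabeled": 256,  # B_u
    "pretrain_batch_size": 256,     # batch size for baselines / pre-training
}

# Dataset-specific overrides reported in the paper.
OVERRIDES = {
    "fashion-mnist": {},
    "cifar-10": {},
    "cifar-100": {
        "pretrain_batch_size": 64,
        "pretrain_epochs": 50,          # T₀ = 50
        "inner_loops": 800,             # I = 800
        "batch_pseudo_labeled": 16,     # B_l = 16
        "batch_pseudo_unlabeled": 64,   # B_u = 64
    },
}


def get_config(dataset: str) -> dict:
    """Return the merged hyper-parameter set for a given benchmark dataset."""
    cfg = dict(BASE_CONFIG)
    cfg.update(OVERRIDES[dataset.lower()])
    return cfg


if __name__ == "__main__":
    # Example: print the settings used for CIFAR-100.
    print(get_config("cifar-100"))
```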