Who Is Your Right Mixup Partner in Positive and Unlabeled Learning
Authors: Changchun Li, Ximing Li, Lei Feng, Jihong Ouyang
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Comprehensive experimental results demonstrate the effectiveness of the heuristic mixup technique in PU learning and show that P3Mix can consistently outperform the state-of-the-art PU learning methods. |
| Researcher Affiliation | Academia | 1College of Computer Science and Technology, Jilin University, China 2College of Computer Science, Chongqing University, China 3Imperfect Information Learning Team, RIKEN Center for Advanced Intelligence Project, Japan |
| Pseudocode | Yes | Algorithm 1 Training procedure of P3Mix, P3Mix-E and P3Mix-C |
| Open Source Code | No | The paper provides links to the code of various baseline methods (e.g., nnPU, Self-PU, PAN, VPU, MIXPUL) in the appendix, but it gives no link to, or statement about, the availability of source code for the proposed P3Mix method. |
| Open Datasets | Yes | In the experiments, we employ three prevalent benchmark datasets, including Fashion MNIST (F-MNIST) (Xiao et al., 2017), CIFAR-10 (Krizhevsky, 2016), and STL-10 (Coates et al., 2011). |
| Dataset Splits | Yes | For each dataset, we randomly select 1,000 positive instances from the training set, and 500 instances as the validation set. |
| Hardware Specification | No | The paper mentions implementing P3Mix using Pytorch and Adam algorithm but does not specify any hardware details like GPU models, CPU types, or cloud computing resources used for experiments. |
| Software Dependencies | No | The paper states: 'We implement P3Mix, P3Mix-E and P3Mix-C by using Pytorch (Paszke et al., 2019) with the Adam algorithm (Kingma & Ba, 2014).' While PyTorch is mentioned with a citation year, a specific version number like 'PyTorch 1.9' is not provided. Adam is an algorithm, not a software dependency with a version. |
| Experiment Setup | Yes | We employ the cross entropy function as the loss function ℓ of Eq. (2), fix the mixup hyperparameter α to 1 and the size k of the candidate mixup pool Xcnd to 100, and choose the coefficient parameter β from {0.8, 0.9, 1.0} and the thresholding parameter γ from {0.85, 0.9, 0.95}. ... Specifically, the early-learning regularization parameter of P3Mix-E is chosen from {1.0, 2.0, 3.0, 4.0, 5.0}. |
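For context on the setup above: with the mixup hyperparameter α fixed to 1, the mixing coefficient λ ~ Beta(1, 1) is simply uniform on [0, 1]. The sketch below shows only the standard mixup interpolation (Zhang et al., 2018) that the setup quotes; the partner-selection heuristic that defines P3Mix is the paper's contribution and is not reproduced here, and the function name is illustrative.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=1.0, rng=None):
    """Standard mixup interpolation of two instances and their labels.

    With alpha = 1.0, as fixed in the paper's setup, the mixing
    coefficient lam is drawn from Beta(1, 1), i.e. Uniform(0, 1),
    so every convex combination of the pair is equally likely.
    """
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)
    x_mix = lam * x1 + (1.0 - lam) * x2   # interpolated input
    y_mix = lam * y1 + (1.0 - lam) * y2   # interpolated (soft) label
    return x_mix, y_mix
```

P3Mix differs from vanilla mixup only in how the partner (x2, y2) is chosen: it is drawn heuristically from a candidate pool of size k (100 in the setup above) rather than sampled uniformly from the mini-batch.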