Positive and Unlabeled Learning with Label Disambiguation

Authors: Chuang Zhang, Dexin Ren, Tongliang Liu, Jian Yang, Chen Gong

IJCAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimentally, we conduct intensive experiments on both benchmark and real-world datasets, and the results clearly demonstrate the superiority of the proposed PULD to the existing PU learning approaches." ... (Section 4, Experiments) "To demonstrate the superiority of our proposed PULD to the existing PU methods, we perform intensive experiments on both benchmark and real-world datasets in this section."
Researcher Affiliation | Collaboration | Chuang Zhang^1, Dexin Ren^1, Tongliang Liu^3, Jian Yang^{1,2}, and Chen Gong^1. ^1 PCA Lab, the Key Laboratory of Intelligent Perception and Systems for High-Dimensional Information of Ministry of Education, School of Computer Science and Engineering, Nanjing University of Science and Technology, China; ^2 Jiangsu Key Lab of Image and Video Understanding for Social Security; ^3 UBTECH Sydney AI Centre, SCS, FEIT, The University of Sydney, Australia
Pseudocode | Yes | "Algorithm 1: The algorithm for solving PULD"
Open Source Code | No | The paper does not provide an explicit statement or a link indicating the release of source code for the described methodology.
Open Datasets | Yes | "Specifically, five binary datasets are adopted for algorithm evaluation including vote, diabetes, wdbc, fri, and phishing, and their configurations are listed in Table 1." ... (footnote 1) https://www.openml.org/ ... "CIFAR-10 [Krizhevsky and Hinton, 2009] and SVHN [Netzer et al., 2011] datasets are chosen to test their performance."
Dataset Splits | Yes | "Under each r, we conduct five-fold cross validation on every compared method and report the average accuracy over the five independent implementations." ... "the training set and the test set are split in advance with 50000 training examples and 10000 test examples for CIFAR-10, and 73257 training examples and 26032 test examples for SVHN."
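The quoted split protocol (five folds, accuracy averaged over the five runs) can be sketched as follows. This is a minimal illustration, not the authors' code; the fold assignment, random seed, and the placeholder `accuracy_fn` are assumptions.

```python
import random

def five_fold_splits(n_examples, seed=0):
    """Yield (train_idx, test_idx) pairs for 5-fold cross validation.

    Each example lands in exactly one test fold; the remaining four
    folds form the training set, as in the quoted protocol.
    """
    idx = list(range(n_examples))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::5] for i in range(5)]  # five roughly equal folds
    for k in range(5):
        test_idx = folds[k]
        train_idx = [i for j, fold in enumerate(folds) if j != k for i in fold]
        yield train_idx, test_idx

def cv_accuracy(n_examples, accuracy_fn):
    """Average accuracy over the five independent runs."""
    scores = [accuracy_fn(tr, te) for tr, te in five_fold_splits(n_examples)]
    return sum(scores) / len(scores)
```

`accuracy_fn` would train a PU method on the training indices and score it on the test indices; here it is only a stand-in for whatever classifier is under evaluation.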
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts) used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers (e.g., Python 3.8, PyTorch 1.9) required to replicate the experiment.
Experiment Setup | Yes | "For LDCE, we choose regularization parameter λ from {2^-4, ..., 2^4} and β from {0.1, 0.2, ..., 0.9} according to [Shi et al., 2018]. Moreover, for the proposed PULD, K is chosen from the candidate set {6, 8, 10, 12, 14}, δ is chosen from {10^-2, ..., 10^1}, and the trade-off parameters µ, γ, and λ in (10) are tuned by searching the grid {10^-4, ..., 10^1}."
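The quoted candidate sets can be written out explicitly. The sketch below builds the log-spaced grids as stated in the paper and runs an exhaustive search over PULD's parameters; the search loop and the placeholder `score_fn` are assumptions, not the authors' tuning procedure.

```python
from itertools import product

# Candidate grids as quoted (exponents are log-spaced).
lambda_ldce = [2.0 ** e for e in range(-4, 5)]           # {2^-4, ..., 2^4}
beta_grid   = [round(0.1 * i, 1) for i in range(1, 10)]  # {0.1, ..., 0.9}
K_grid      = [6, 8, 10, 12, 14]
delta_grid  = [10.0 ** e for e in range(-2, 2)]          # {10^-2, ..., 10^1}
tradeoff    = [10.0 ** e for e in range(-4, 2)]          # {10^-4, ..., 10^1} for mu, gamma, lambda

def grid_search(score_fn):
    """Exhaustive search over PULD's (K, delta, mu, gamma, lambda) grid."""
    best, best_score = None, float("-inf")
    for K, delta, mu, gamma, lam in product(K_grid, delta_grid,
                                            tradeoff, tradeoff, tradeoff):
        s = score_fn(K, delta, mu, gamma, lam)
        if s > best_score:
            best, best_score = (K, delta, mu, gamma, lam), s
    return best, best_score
```

In practice `score_fn` would be the cross-validated accuracy of PULD under one parameter setting; the full grid here has 5 x 4 x 6^3 = 4320 settings.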