Online Positive and Unlabeled Learning
Authors: Chuang Zhang, Chen Gong, Tengfei Liu, Xun Lu, Weiqiang Wang, Jian Yang
IJCAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirically, we conduct intensive experiments on both benchmark and real-world datasets, and the results clearly demonstrate the effectiveness of the proposed method. |
| Researcher Affiliation | Collaboration | Chuang Zhang^1, Chen Gong^{1,3}, Tengfei Liu^4, Xun Lu^4, Weiqiang Wang^4 and Jian Yang^{1,2}; (1) PCA Lab, the Key Laboratory of Intelligent Perception and Systems for High-Dimensional Information of Ministry of Education, School of Computer Science and Engineering, Nanjing University of Science and Technology, China; (2) Jiangsu Key Lab of Image and Video Understanding for Social Security; (3) The Department of Computing, Hong Kong Polytechnic University; (4) Ant Financial Services Group |
| Pseudocode | Yes | Algorithm 1 Basic OPU with single Coming Datum |
| Open Source Code | No | The paper does not provide any explicit statement about releasing source code or a link to a code repository. |
| Open Datasets | Yes | We conduct experiments on a variety of benchmark datasets from the OpenML machine learning repository. To be specific, four binary datasets are adopted for algorithm evaluation including Vote, Australian, Mushroom, and Phishing... Here, we investigate the performance of the compared methods on image classification tasks. Concretely, CIFAR10 [Krizhevsky and Hinton, 2009] and SVHN [Netzer et al., 2011] are chosen to evaluate their performance. |
| Dataset Splits | Yes | For each dataset, we randomly choose r = 20%, 30%, and 40% positive examples as well as all negative examples as unlabeled and leave the rest positive examples as labeled. Under each r, we conduct five-fold cross validation on every compared method and report the average accuracy and standard deviation over the five independent implementations. As a result, each model under a certain implementation is trained with 80% examples and then tested on the rest 20% examples. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers. |
| Experiment Setup | Yes | The parameters of every algorithm have been carefully tuned on the validation set to achieve the best performance. In our OPU, we choose the regularization parameter λ from {10^{-6}, ..., 10^{2}}. For UPU, the regularization parameter λ is chosen from {10^{-3}, ..., 10^{1}}. |
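The dataset-split protocol quoted above (hide a fraction r of the positive examples among the unlabeled pool, which also contains all negatives) can be sketched as a small helper. The function name and interface are hypothetical; the paper describes the procedure only in prose.

```python
import random

def make_pu_split(y, r, seed=0):
    """Turn fully labeled binary data into a PU labeling.

    y    : list of true labels in {+1, -1}
    r    : fraction of positive examples hidden in the unlabeled set
    seed : RNG seed for reproducibility

    Returns a parallel list s with 1 for labeled-positive and 0 for
    unlabeled (the hidden positives plus all negative examples).
    Hypothetical sketch of the protocol described in the paper.
    """
    rng = random.Random(seed)
    pos_idx = [i for i, label in enumerate(y) if label == 1]
    n_hide = round(r * len(pos_idx))          # r% of positives go unlabeled
    hidden = set(rng.sample(pos_idx, n_hide))
    # Positives not hidden stay labeled; everything else is unlabeled.
    return [1 if (y[i] == 1 and i not in hidden) else 0 for i in range(len(y))]
```

With r = 0.2 on a set of 10 positives and 10 negatives, 8 examples remain labeled positive and the other 12 form the unlabeled pool, matching the r = 20% setting in the table.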
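The λ grids in the experiment-setup row read as log-spaced powers of ten; assuming that interpretation, they can be enumerated directly (variable names are illustrative, not from the paper):

```python
# Candidate regularization strengths, assuming log-spaced grids of
# powers of ten as the paper's set notation suggests.
opu_grid = [10.0 ** k for k in range(-6, 3)]  # OPU: 1e-6, ..., 1e2
upu_grid = [10.0 ** k for k in range(-3, 2)]  # UPU: 1e-3, ..., 1e1
```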