Predictive Adversarial Learning from Positive and Unlabeled Data

Authors: Wenpeng Hu, Ran Le, Bing Liu, Feng Ji, Jinwen Ma, Dongyan Zhao, Rui Yan

AAAI 2021, pp. 7806-7814

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Evaluation using both image and text data shows that PAN outperforms state-of-the-art PU learning methods and also a direct adaptation of GAN for PU learning.
Researcher Affiliation | Collaboration | Wenpeng Hu (1,*), Ran Le (2,*), Bing Liu (3), Feng Ji (4), Jinwen Ma (1), Dongyan Zhao (2), Rui Yan (2); 1 Department of Information Science, School of Mathematical Sciences, Peking University; 2 Wangxuan Institute of Computer Technology, Peking University; 3 Department of Computer Science, University of Illinois at Chicago; 4 Alibaba Group; {wenpeng.hu, leran, jwma, zhaody, ruiyan}@pku.edu.cn, liub@uic.edu, zhongxiu.jf@alibaba-inc.com
Pseudocode | Yes | Algorithm 1: PAN training by the minibatch stochastic gradient descent method.
Open Source Code | No | The paper does not provide concrete access to source code for the PAN method described in this paper; it only references open-source code for baselines.
Open Datasets | Yes | YELP: http://www.yelp.com/dataset_challenge; RT: http://www.cs.cornell.edu/people/pabo/movie-review-data/; IMDB: https://www.imdb.com/interfaces/; 20NEWS: http://qwone.com/~jason/20Newsgroups/; MNIST: http://yann.lecun.com/exdb/mnist/; CIFAR10: http://www.cs.toronto.edu/~kriz/cifar10-python.tar.gz
Dataset Splits | No | The paper describes how the training data (positive P and unlabeled U) and the test data are prepared, but does not explicitly mention a separate validation split.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used to run its experiments, only general training configurations.
Software Dependencies | No | The paper mentions the Adam algorithm for optimization and TensorFlow for a baseline, but does not specify version numbers for the programming languages, libraries, or frameworks used in the experiments.
Experiment Setup | Yes | Training Details: For a fair comparison, PAN uses the same architecture for the classifier C(·) as NNPU. For text, a 2-layer convolutional network (CNN), with 5 × 100 and 3 × 100 convolutions for layers 1 and 2 respectively and 100 filters per layer, is used as both the classifier C(·) and the discriminator D(·). (...) We set λ in Eq. 3 and Eq. 7 to 0.0001. (...) We also balance the impact of positive and unlabeled data for term I in Eq. 3 during training; otherwise the positive examples would be dominated by the unlabeled data. We use a 1:1 ratio of positive to unlabeled data in each mini-batch during training. The network parameters are updated using the Adam algorithm with learning rate 0.0001. a-GAN requires pre-training of D(·): we use the original positive and unlabeled (regarded as negative) data to pre-train D(·) so that it can classify positive versus unlabeled data. We pre-train D(·) for 3 epochs on each dataset. (A hedged code sketch of this setup follows the table.)
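The sketch below illustrates, in PyTorch, how the quoted training configuration could look. It is not the authors' implementation: the paper's actual objective (Eqs. 3 and 7) is not reproduced in the excerpt above, so the TextCNN layout (padding, pooling), the balanced_pu_batches helper, the train_step loss, and all tensor shapes are illustrative assumptions built only from the stated details (2-layer text CNN with kernel widths 5 and 3 over 100-dimensional embeddings, 100 filters per layer, Adam with learning rate 0.0001, λ = 0.0001, and 1:1 positive-to-unlabeled mini-batches).

    # Illustrative sketch only: reconstructs the quoted training configuration,
    # not the authors' released code. Layout, pooling, and loss are assumptions.
    import torch
    import torch.nn as nn

    class TextCNN(nn.Module):
        """2-layer 1-D CNN used (per the paper) as both classifier C(.) and discriminator D(.).
        Kernel widths 5 and 3 over 100-dim word embeddings, 100 filters per layer (assumed layout)."""
        def __init__(self, vocab_size, emb_dim=100, n_filters=100):
            super().__init__()
            self.emb = nn.Embedding(vocab_size, emb_dim)
            self.conv1 = nn.Conv1d(emb_dim, n_filters, kernel_size=5, padding=2)
            self.conv2 = nn.Conv1d(n_filters, n_filters, kernel_size=3, padding=1)
            self.out = nn.Linear(n_filters, 1)

        def forward(self, token_ids):                      # token_ids: (batch, seq_len)
            h = self.emb(token_ids).transpose(1, 2)        # (batch, emb_dim, seq_len)
            h = torch.relu(self.conv1(h))
            h = torch.relu(self.conv2(h))
            h = h.max(dim=2).values                        # global max-pool over time (assumption)
            return torch.sigmoid(self.out(h)).squeeze(1)   # score in (0, 1)

    def balanced_pu_batches(pos, unl, batch_size=64):
        """Yield mini-batches with a 1:1 ratio of positive to unlabeled examples,
        as described in the training details. pos / unl are LongTensors of token ids."""
        half = batch_size // 2
        n_batches = min(len(pos), len(unl)) // half
        p_idx, u_idx = torch.randperm(len(pos)), torch.randperm(len(unl))
        for i in range(n_batches):
            yield pos[p_idx[i * half:(i + 1) * half]], unl[u_idx[i * half:(i + 1) * half]]

    # Hyperparameters quoted in the row above; the loss below is a generic PU-style
    # placeholder, NOT Eq. 3 / Eq. 7 of the paper.
    lam = 1e-4
    classifier = TextCNN(vocab_size=20000)
    optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-4)

    def train_step(pos_batch, unl_batch):
        optimizer.zero_grad()
        p_scores = classifier(pos_batch)
        u_scores = classifier(unl_batch)
        # Placeholder objective: push positives toward 1 and unlabeled toward 0,
        # with lam as a generic regularization weight (stand-in for the paper's lambda).
        loss = -(torch.log(p_scores + 1e-8).mean() + torch.log(1 - u_scores + 1e-8).mean())
        loss = loss + lam * sum((w ** 2).sum() for w in classifier.parameters())
        loss.backward()
        optimizer.step()
        return loss.item()

A training epoch under these assumptions would simply iterate the balanced batches, e.g. `for p_batch, u_batch in balanced_pu_batches(pos_ids, unl_ids): train_step(p_batch, u_batch)`; the 1:1 sampling mirrors the paper's note that unbalanced mini-batches would let unlabeled data dominate the positive examples.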