Generative Adversarial Positive-Unlabelled Learning
Authors: Ming Hou, Brahim Chaib-draa, Chao Li, Qibin Zhao
IJCAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on both synthetic and real-world datasets demonstrate the effectiveness of our proposed framework. With infinite realistic and diverse samples generated from GenPU, a very flexible classifier can then be trained using deep neural networks. |
| Researcher Affiliation | Academia | 1 Tensor Learning Unit, Center for Advanced Intelligence Project, RIKEN, Japan; 2 Department of Computer Science and Software Engineering, Laval University, Canada; 3 Causal Inference Team, Center for Advanced Intelligence Project, RIKEN, Japan; 4 School of Automation, Guangdong University of Technology, China |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper mentions a link to third-party code ('The software codes for UPU and NNPU are downloaded from https://github.com/kiryor/nnPUlearning') but does not provide concrete access to its own (GenPU) source code. |
| Open Datasets | Yes | Next, the evaluation is carried out on the MNIST [LeCun et al., 1998] and USPS [LeCun et al., 1990] datasets. For real data, the approaches including oracle PN, unbiased PU (UPU) [Du Plessis et al., 2015], and non-negative PU (NNPU) [Kiryo et al., 2017] are selected for comparison. For this set, the data is taken from the CelebA dataset [Liu et al., 2015] and resized to 64 × 64. |
| Dataset Splits | Yes | The training set contains 5000 positive and 5000 negative samples, which are then partitioned into 500 positively labelled and 9500 unlabelled samples. Regarding the weights of GenPU, for simplicity, we set λu = 1 and freely tune λp and λn on the validation set via grid search over a range like [..., 0.01, 0.02, ..., 0.1, 0.2, ..., 1, 2, ...]. The first 20,000 male and 20,000 female faces in CelebA are chosen as the training set and the last 1,000 faces are used as the test set. Then, 2,000 out of the 20,000 male faces are randomly selected as positively labelled data. (See the split and grid-search sketches after the table.) |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper specifies network architectures and optimization details (e.g., 'Adam(0.9, 0.999)') but does not provide specific version numbers for software dependencies like programming languages or deep learning frameworks (e.g., PyTorch, TensorFlow). |
| Experiment Setup | Yes | Table 1: Specifications of network architecture and hyperparameters for the USPS/MNIST datasets. Gp(z), Gn(z): z ~ N(0, I), 100 → fully connected 256, leaky ReLU → fully connected 256, leaky ReLU → fully connected 256/784, tanh. Dp(x), Dn(x): 256/784 → fully connected 1, sigmoid. Du(x): 256/784 → fully connected 256, leaky ReLU → fully connected 256, leaky ReLU → fully connected 1, sigmoid. Leaky ReLU slope: 0.2; mini-batch sizes for Xp, Xu: 50, 100; learning rate: 0.0003; optimizer: Adam(0.9, 0.999); weight, bias initialization: 0, 0. (See the PyTorch architecture sketch after the table.) |
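
The split described in the Dataset Splits row is straightforward to reproduce. Below is a minimal NumPy sketch, assuming access to the fully labelled source data, of partitioning 5000 positives and 5000 negatives into 500 labelled positives and 9500 unlabelled samples; the function name and seed are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)  # illustrative seed, not from the paper

def make_pu_split(x_pos, x_neg, n_labelled=500):
    """Build a PU training set from fully labelled data: keep n_labelled
    positives as the labelled set X_p and pool the remaining positives
    with all negatives as the unlabelled set X_u."""
    idx = rng.permutation(len(x_pos))
    x_p = x_pos[idx[:n_labelled]]                           # labelled positives
    x_u = np.concatenate([x_pos[idx[n_labelled:]], x_neg])  # unlabelled mixture
    return x_p, x_u[rng.permutation(len(x_u))]              # shuffle the pool

# With 5000 positives and 5000 negatives this yields the paper's
# 500 labelled / 9500 unlabelled partition.
```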
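The weight tuning reported in the same row can be sketched similarly. The paper fixes λu = 1 and grid-searches λp and λn on the validation set; the grid values below are illustrative (the paper only indicates the rough range), and `validation_score` is a hypothetical stand-in for a full GenPU train-and-evaluate run.

```python
import itertools

def validation_score(lambda_p, lambda_n, lambda_u=1.0):
    """Hypothetical helper: train GenPU with these loss weights and
    return accuracy on the held-out validation set."""
    return 0.0  # replace with an actual train-and-evaluate call

grid = [0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1.0, 2.0]  # illustrative values
lambda_p, lambda_n = max(itertools.product(grid, grid),
                         key=lambda pn: validation_score(*pn))
```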
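The Table 1 specification maps directly onto a few fully connected networks. The paper does not name its deep learning framework, so the PyTorch rendering below is an assumption: two-hidden-layer generators with tanh outputs, single-layer discriminators Dp/Dn, a two-hidden-layer Du, LeakyReLU slope 0.2, and Adam at learning rate 0.0003 with betas (0.9, 0.999).

```python
import torch
import torch.nn as nn

Z_DIM, H_DIM = 100, 256
X_DIM = 784  # MNIST; Table 1 lists 256 for USPS (16 x 16 images)

def generator():
    # Gp(z), Gn(z): z ~ N(0, I), two hidden layers, tanh output
    return nn.Sequential(
        nn.Linear(Z_DIM, H_DIM), nn.LeakyReLU(0.2),
        nn.Linear(H_DIM, H_DIM), nn.LeakyReLU(0.2),
        nn.Linear(H_DIM, X_DIM), nn.Tanh())

def discriminator_pn():
    # Dp(x), Dn(x): a single fully connected layer with sigmoid output
    return nn.Sequential(nn.Linear(X_DIM, 1), nn.Sigmoid())

def discriminator_u():
    # Du(x): two hidden layers with leaky ReLU, sigmoid output
    return nn.Sequential(
        nn.Linear(X_DIM, H_DIM), nn.LeakyReLU(0.2),
        nn.Linear(H_DIM, H_DIM), nn.LeakyReLU(0.2),
        nn.Linear(H_DIM, 1), nn.Sigmoid())

g_p, g_n = generator(), generator()
d_p, d_n, d_u = discriminator_pn(), discriminator_pn(), discriminator_u()

# Adam(0.9, 0.999) at learning rate 0.0003, as reported; separate
# optimisers per side, as is usual for adversarial training.
opt_g = torch.optim.Adam([*g_p.parameters(), *g_n.parameters()],
                         lr=3e-4, betas=(0.9, 0.999))
opt_d = torch.optim.Adam([*d_p.parameters(), *d_n.parameters(),
                          *d_u.parameters()], lr=3e-4, betas=(0.9, 0.999))
```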