Positive Unlabeled Learning with Class-prior Approximation
Authors: Shizhen Chang, Bo Du, Liangpei Zhang
IJCAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we systematically evaluate the effectiveness of the proposed CAPU method compared with other state-of-the-art PU methods on a synthetic dataset and real-world datasets taken from the UCI Machine Learning Repository. |
| Researcher Affiliation | Academia | Shizhen Chang, Bo Du and Liangpei Zhang. School of Computer Science, State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Institute of Artificial Intelligence, National Engineering Research Center for Multimedia Software, Wuhan University. {szchang, dubo, zlp62}@whu.edu.cn |
| Pseudocode | Yes | Algorithm 1 The optimization process of the proposed model |
| Open Source Code | No | The paper provides links to the code for *comparable methods* (EN, PE, KM, TIcE, UPU, USMO) in footnotes, but does not provide a link or explicit statement about releasing the source code for their own proposed CAPU method. |
| Open Datasets | Yes | Real-world Datasets. We utilize four real-world datasets downloaded from the UCI Machine Learning Repository to evaluate the performance of our proposed algorithm. |
| Dataset Splits | No | The paper describes how positive and unlabeled samples are created and their counts, but it does not specify explicit training, validation, and test dataset splits for model development and evaluation. |
| Hardware Specification | No | The numerical calculations in this paper have been done on the supercomputing system in the Supercomputing Center of Wuhan University. This is a general statement about the computing environment but lacks specific hardware details like CPU/GPU models or memory. |
| Software Dependencies | No | The paper does not provide specific software dependencies, such as programming languages or library versions (e.g., Python 3.x, PyTorch 1.x), that would be necessary to replicate the experiments. |
| Experiment Setup | Yes | There are three parameters included in our CAPU model: the width σ of the RBF kernel, and the trade-off parameters λ and β. [...] The performance of our CAPU model is best when the kernel width σ = 1. [...] Algorithm 1 ... Parameter: the width of the Gaussian kernel σ, hyperparameters λ and β, and threshold µ = 1/min(n_p, n_u). Constants: ϵ = 0.04, θ_max = 10. |
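The reported settings above (RBF kernel width σ = 1 and threshold µ = 1/min(n_p, n_u)) can be sketched in a few lines. This is not the authors' implementation of CAPU; the kernel definition is the standard Gaussian RBF, and the sample counts `n_p`, `n_u` and the synthetic data are illustrative assumptions only.

```python
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    """Gaussian (RBF) kernel matrix: K[i, j] = exp(-||x_i - y_j||^2 / (2 * sigma^2))."""
    sq_dists = (np.sum(X**2, axis=1)[:, None]
                + np.sum(Y**2, axis=1)[None, :]
                - 2.0 * X @ Y.T)
    return np.exp(-sq_dists / (2.0 * sigma**2))

# Illustrative counts of positive and unlabeled samples (not from the paper's tables).
n_p, n_u = 50, 200
mu = 1.0 / min(n_p, n_u)   # threshold as reported: mu = 1 / min(n_p, n_u)

rng = np.random.default_rng(0)
X_p = rng.normal(1.0, 1.0, size=(n_p, 2))   # positive samples
X_u = rng.normal(0.0, 1.0, size=(n_u, 2))   # unlabeled samples

# Kernel matrix with the best-reported width sigma = 1.
K = rbf_kernel(X_p, X_u, sigma=1.0)
```

With σ = 1 every kernel entry lies in (0, 1], and `mu` here evaluates to 1/50 = 0.02 for these illustrative counts.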