Analysis of Learning from Positive and Unlabeled Data
Authors: Marthinus C. du Plessis, Gang Niu, Masashi Sugiyama
NeurIPS 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we numerically illustrate the above theoretical findings through experiments. In this section, we experimentally compare the performance of the ramp loss and the hinge loss in PU classification (weighting was performed w.r.t. the true class prior and the ramp loss was optimized with [12]). We used the USPS dataset, with the dimensionality reduced to 2 via principal component analysis to enable illustration. 550 samples were used for the positive and mixture datasets. From the results in Table 1, it is clear that the ramp loss gives a much higher classification accuracy than the hinge loss, especially for large class priors. |
| Researcher Affiliation | Collaboration | Marthinus C. du Plessis, The University of Tokyo, Tokyo 113-0033, Japan (christo@ms.k.u-tokyo.ac.jp); Gang Niu, Baidu Inc., Beijing 100085, China (niugang@baidu.com); Masashi Sugiyama, The University of Tokyo, Tokyo 113-0033, Japan (sugi@k.u-tokyo.ac.jp) |
| Pseudocode | No | The paper describes algorithms conceptually but does not provide any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statement or link to open-source code for the methodology described. |
| Open Datasets | Yes | We used the USPS dataset, with the dimensionality reduced to 2 via principal component analysis to enable illustration. |
| Dataset Splits | No | The paper mentions '550 samples were used for the positive and mixture datasets' but does not specify train/validation/test splits, percentages, or cross-validation setup. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments (e.g., CPU, GPU models, memory, or cloud instances). |
| Software Dependencies | No | The paper mentions 'Some implementations of support vector machines, such as libsvm [6], allow for assigning weights to classes.' but does not specify a version number for libsvm or any other software dependencies. |
| Experiment Setup | No | The paper states 'weighting was performed w.r.t. the true class prior and the ramp loss was optimized with [12]' but lacks specific experimental setup details such as hyperparameter values (e.g., regularization strength or kernel parameters for the SVM-based classifiers) or detailed training configurations. |
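The ramp-vs-hinge comparison quoted in the Research Type row reflects the paper's central observation: PU classification behaves well for losses satisfying the symmetry condition ℓ(z) + ℓ(−z) = 1, which the ramp loss satisfies and the hinge loss does not (the hinge loss incurs a superfluous penalty term). A minimal sketch in plain Python, written here purely as an illustration of that condition and not as the authors' experimental code:

```python
def hinge(z):
    """Hinge loss: l_H(z) = max(0, 1 - z)."""
    return max(0.0, 1.0 - z)

def ramp(z):
    """Ramp loss: l_R(z) = (1/2) * max(0, min(2, 1 - z))."""
    return 0.5 * max(0.0, min(2.0, 1.0 - z))

# The ramp loss satisfies l(z) + l(-z) = 1 for every margin z,
# while the hinge loss violates it (e.g. at z = 0 the sum is 2).
for z in [-2.0, -0.5, 0.0, 0.5, 2.0]:
    print(f"z={z:+.1f}  ramp sum={ramp(z) + ramp(-z):.1f}  "
          f"hinge sum={hinge(z) + hinge(-z):.1f}")
```

For the ramp loss the printed sum is 1.0 at every z, while for the hinge loss it exceeds 1 on (−1, 1); this is the symmetry property behind the accuracy gap the quoted experiment reports, especially at large class priors.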