Theoretical Comparisons of Positive-Unlabeled Learning against Positive-Negative Learning

Authors: Gang Niu, Marthinus Christoffel du Plessis, Tomoya Sakai, Yao Ma, Masashi Sugiyama

NeurIPS 2016 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our theoretical findings well agree with the experimental results on artificial and benchmark data even when the experimental setup does not match the theoretical assumptions exactly." and "In this section, we experimentally validate our theoretical findings."
Researcher Affiliation | Academia | The University of Tokyo, Japan; RIKEN, Japan; Boston University, USA
Pseudocode | No | The paper describes the methods mathematically but does not include any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper neither states that source code for the methodology is released nor links to a code repository.
Open Datasets | Yes | "Table 2 summarizes the specification of benchmarks, which were downloaded from many sources including the IDA benchmark repository [29], the UCI machine learning repository, the semi-supervised learning book [30], and the European ESPRIT 5516 project." and (footnote) "See http://www.raetschlab.org/Members/raetsch/benchmark/ for IDA, http://archive.ics.uci.edu/ml/ for UCI, http://olivier.chapelle.cc/ssl-book/ for the SSL book and https://www.elen.ucl.ac.be/neural-nets/Research/Projects/ELENA/ for the ELENA project."
Dataset Splits | No | The paper mentions using "five-fold cross-validation" for parameter selection, but does not specify a general training/validation/test split with explicit percentages or sample counts for the main experiments. (A cross-validation sketch follows the table below.)
Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments, such as GPU/CPU models, memory, or cloud instance types.
Software Dependencies | No | The paper mentions that the solver comes from a previous work [7] and refers to optimization techniques [27, 28], but it does not specify any software dependencies with version numbers.
Experiment Setup | Yes | "The model g(x) = ⟨w, x⟩ + b where w ∈ R², b ∈ R and the scaled ramp loss ℓ_SR are employed. In addition, an ℓ2-regularization is added with the regularization parameter fixed to 10^-3." and "In (a)(b), n_+ = 45, n_- = 5, π = 0.5, and n_U varies from 5 to 200; in (c)(d), n_+ = 45, n_- = 5, n_U = 100, and π varies from 0.05 to 0.95." (A hedged code sketch of this setup follows the table.)
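The experiment setup row pins down enough detail to sketch in code. Below is a minimal Python sketch, not the authors' implementation: it assumes the scaled ramp loss has the standard symmetric form ℓ_SR(z) = max(0, min(2, 1 - z))/2 and that the objective is the unbiased PU risk estimator for symmetric losses, R_PU(g) = 2π E_P[ℓ(g(X))] + E_U[ℓ(-g(X))] - π; the paper's actual solver comes from [7], so the generic optimizer, the toy 2-D Gaussian data, and the seed here are stand-ins.

import numpy as np
from scipy.optimize import minimize

def scaled_ramp_loss(z):
    # Assumed form of the scaled ramp loss: l_SR(z) = max(0, min(2, 1 - z)) / 2.
    # It satisfies the symmetry condition l(z) + l(-z) = 1, which the
    # unbiased PU risk estimator below relies on.
    return np.maximum(0.0, np.minimum(2.0, 1.0 - z)) / 2.0

def pu_objective(theta, x_p, x_u, pi, lam=1e-3):
    # Linear model g(x) = <w, x> + b with theta = (w, b), w in R^2.
    w, b = theta[:-1], theta[-1]
    g_p = x_p @ w + b  # scores on the n_+ positive samples
    g_u = x_u @ w + b  # scores on the n_U unlabeled samples
    # Unbiased PU risk for a symmetric loss (assumed estimator):
    # R_PU(g) = 2*pi*E_P[l(g)] + E_U[l(-g)] - pi.
    risk = 2.0 * pi * scaled_ramp_loss(g_p).mean() + scaled_ramp_loss(-g_u).mean() - pi
    # l2-regularizer with its parameter fixed to 1e-3, as in the setup row.
    return risk + lam * np.dot(w, w)

# Toy data mimicking one configuration (n_+ = 45, n_U = 100, pi = 0.5);
# the class-conditional Gaussians are hypothetical.
rng = np.random.default_rng(0)
x_p = rng.normal(loc=[+1.0, +1.0], scale=1.0, size=(45, 2))
x_u = np.vstack([rng.normal(loc=[+1.0, +1.0], scale=1.0, size=(50, 2)),
                 rng.normal(loc=[-1.0, -1.0], scale=1.0, size=(50, 2))])
# Nelder-Mead is a derivative-free stand-in for the paper's solver:
# the ramp loss is piecewise linear, hence non-smooth.
res = minimize(pu_objective, x0=np.zeros(3), args=(x_p, x_u, 0.5), method="Nelder-Mead")
print("learned (w, b):", res.x)

Swapping pu_objective for the ordinary PN empirical risk, pi * mean(l(g(x_p))) + (1 - pi) * mean(l(-g(x_n))), gives the baseline against which the paper compares PU learning.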
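The dataset-splits row notes that hyperparameters were chosen by five-fold cross-validation rather than a fixed global split. Here is a minimal sketch of such a selection loop using scikit-learn's KFold; the fit and risk hooks and the candidate grid are hypothetical, since the paper states only that five-fold cross-validation was used for parameter selection.

import numpy as np
from sklearn.model_selection import KFold

def select_lambda_by_cv(x, y, candidates, fit, risk, n_splits=5, seed=0):
    # Five-fold cross-validation over a candidate grid. `fit(x, y, lam)`
    # returns a trained model and `risk(model, x, y)` its validation risk;
    # both hooks are placeholders for whatever learner and criterion are used.
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    mean_risks = []
    for lam in candidates:
        fold_risks = [risk(fit(x[tr], y[tr], lam), x[va], y[va])
                      for tr, va in kf.split(x)]
        mean_risks.append(np.mean(fold_risks))
    # Return the candidate with the lowest average validation risk.
    return candidates[int(np.argmin(mean_risks))]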