Convex Formulation for Learning from Positive and Unlabeled Data

Authors: Marthinus du Plessis, Gang Niu, Masashi Sugiyama

ICML 2015

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Theoretically, we prove that the estimators converge to the optimal solutions at the optimal parametric rate. Experimentally, we demonstrate that PU classification with the double hinge loss performs as accurately as the non-convex method, with a much lower computational cost." (A hedged sketch of this formulation follows the table.)
Researcher Affiliation | Collaboration | Marthinus Christoffel du Plessis (christo@ms.k.u-tokyo.ac.jp), The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan; Gang Niu (niugang@baidu.cn), Baidu Inc., Beijing, 100085, China; Masashi Sugiyama (sugi@k.u-tokyo.ac.jp), The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan
Pseudocode | No | The paper presents mathematical formulations and optimization problems but contains no structured pseudocode or algorithm blocks.
Open Source Code | No | The paper neither states that source code for the method is released nor links to a code repository.
Open Datasets | Yes | "We performed experiments illustrating the method on the MNIST dataset."
Dataset Splits | Yes | "Hyperparameters were selected via cross-validation on the zero-one loss objective in Eq. (2)." (See the cross-validation sketch after the table.)
Hardware Specification | No | The paper does not report the hardware used for its experiments (no CPU/GPU models, clock speeds, or memory amounts).
Software Dependencies | No | The paper does not name the ancillary software, such as libraries or solvers with version numbers, needed to replicate the experiments.
Experiment Setup | Yes | "Using a model g(x) = wx + b, and choosing λ = 10⁻³, we plot the value of the objective function w.r.t. w and b in Fig. 2. Hyperparameters were selected via cross-validation on the zero-one loss objective in Eq. (2)."
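
Because no code was released (see the Open Source Code row above), the following is a minimal sketch of the paper's convex formulation: PU classification with the double hinge loss. The loss and the form of the objective follow the paper; everything else is an illustrative assumption, including the linear model g(x) = <w, x> + b, the known class prior pi, the regularization weight lam, the subgradient-descent solver, and all function names.

```python
# Sketch of convex PU classification with the double hinge loss
# (du Plessis, Niu & Sugiyama, ICML 2015). The solver and all names are
# illustrative assumptions, not the authors' unreleased implementation.
import numpy as np

def double_hinge(z):
    # l_DH(z) = max(-z, max(0, (1 - z) / 2)): convex and piecewise linear.
    return np.maximum(-z, np.maximum(0.0, 0.5 * (1.0 - z)))

def dh_subgrad(z):
    # One valid subgradient of l_DH at each point.
    g = np.full_like(z, -0.5)  # middle piece (1 - z) / 2, slope -1/2
    g[z <= -1.0] = -1.0        # left piece -z, slope -1
    g[z >= 1.0] = 0.0          # right piece 0, slope 0
    return g

def pu_objective(w, b, Xp, Xu, pi, lam):
    # Since l_DH(z) - l_DH(-z) = -z, the positive-class term of the PU
    # risk reduces to -pi * mean(g(x)) (up to an additive constant),
    # keeping the whole objective convex. This is the surface that
    # Fig. 2 in the paper plots over (w, b).
    gp = Xp @ w + b  # outputs on positive examples
    gu = Xu @ w + b  # outputs on unlabeled examples
    return 0.5 * lam * w @ w - pi * gp.mean() + double_hinge(-gu).mean()

def fit_pu(Xp, Xu, pi, lam=1e-3, lr=0.1, n_iters=2000):
    # Plain subgradient descent with a decaying step size.
    w, b = np.zeros(Xp.shape[1]), 0.0
    for t in range(n_iters):
        s = dh_subgrad(-(Xu @ w + b))                # subgradient at -g(x)
        grad_w = (lam * w - pi * Xp.mean(axis=0)
                  - (s[:, None] * Xu).mean(axis=0))  # chain rule: d(-g)/dw = -x
        grad_b = -pi - s.mean()
        step = lr / np.sqrt(t + 1.0)
        w -= step * grad_w
        b -= step * grad_b
    return w, b
```

The key property is that the double hinge satisfies l(z) - l(-z) = -z, so the positive-class part of the risk is linear in g and convexity is preserved; this is what replaces the non-convex ramp-loss formulation the paper compares against. Predictions are sign(g(x)), and the class prior pi is assumed known or estimated separately, as in the paper.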
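
The Dataset Splits and Experiment Setup rows quote model selection by cross-validation on the zero-one loss objective of Eq. (2). In the PU setting that risk can be estimated from positive and unlabeled data alone, as 2·pi·(false-negative rate on positives) + (fraction of unlabeled classified positive) - pi. The sketch below, reusing fit_pu from the previous block, shows one plausible selection loop; the fold assignment and the candidate grid for lambda are illustrative assumptions, not the paper's exact protocol.

```python
# Hedged reconstruction of hyperparameter selection by cross-validation
# on the zero-one PU risk (the paper's Eq. (2)). Fold handling and the
# candidate grid are assumptions; reuses fit_pu from the sketch above.
import numpy as np

def pu_zero_one_risk(w, b, Xp_val, Xu_val, pi):
    fnr = np.mean(Xp_val @ w + b <= 0.0)  # positives predicted negative
    ppu = np.mean(Xu_val @ w + b > 0.0)   # unlabeled predicted positive
    return 2.0 * pi * fnr + ppu - pi

def select_lambda(Xp, Xu, pi, lambdas=(1e-1, 1e-2, 1e-3), n_folds=5):
    rng = np.random.default_rng(0)
    fp = rng.integers(n_folds, size=len(Xp))  # fold ids, positive set
    fu = rng.integers(n_folds, size=len(Xu))  # fold ids, unlabeled set
    mean_risk = {}
    for lam in lambdas:
        risks = [
            pu_zero_one_risk(
                *fit_pu(Xp[fp != k], Xu[fu != k], pi, lam=lam),
                Xp[fp == k], Xu[fu == k], pi)
            for k in range(n_folds)
        ]
        mean_risk[lam] = float(np.mean(risks))
    return min(mean_risk, key=mean_risk.get)
```

The constant -pi does not affect which lambda attains the minimum and could be dropped; it is kept so the score matches the risk estimate itself.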