Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Convex Formulation for Learning from Positive and Unlabeled Data
Authors: Marthinus Christoffel du Plessis, Gang Niu, Masashi Sugiyama
ICML 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Theoretically, we prove that the estimators converge to the optimal solutions at the optimal parametric rate. Experimentally, we demonstrate that PU classification with the double hinge loss performs as accurately as the non-convex method, with a much lower computational cost. |
| Researcher Affiliation | Collaboration | Marthinus Christoffel du Plessis (The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan); Gang Niu (Baidu Inc., Beijing, 100085, China); Masashi Sugiyama (The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan) |
| Pseudocode | No | The paper describes mathematical formulations and optimization problems but does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not include an unambiguous statement about releasing the source code for the described methodology, nor does it provide a direct link to a code repository. |
| Open Datasets | Yes | We performed experiments illustrating the method on the MNIST dataset. |
| Dataset Splits | Yes | Hyperparameters were selected via cross-validation on the zero-one loss objective in Eq. (2). |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details, such as library or solver names with version numbers, needed to replicate the experiment. |
| Experiment Setup | Yes | Using a model g(x) = wx + b, and choosing λ = 10⁻³, we plot the value of the objective function w.r.t. w and b in Fig. 2. Hyperparameters were selected via cross-validation on the zero-one loss objective in Eq. (2). |
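For context, the convex PU objective the table refers to can be sketched as follows. This is a minimal illustrative implementation, not the authors' released code (none was found): it assumes a linear model g(x) = w·x + b, the double hinge loss ℓ(z) = max(−z, max(0, ½ − ½z)), and uses the identity ℓ(z) − ℓ(−z) = −z to simplify the positive-sample term. All variable names (`Xp`, `Xu`, `pi`, `lam`) are illustrative.

```python
import numpy as np

def double_hinge(z):
    # Double hinge loss: ell(z) = max(-z, max(0, 1/2 - z/2)).
    # It satisfies ell(z) - ell(-z) = -z, which keeps the PU risk convex.
    return np.maximum(-z, np.maximum(0.0, 0.5 - 0.5 * z))

def pu_objective(w, b, Xp, Xu, pi, lam):
    """Empirical convex PU risk with the double hinge loss.

    Xp: positive samples, Xu: unlabeled samples, pi: class prior of the
    positive class, lam: L2 regularization strength (illustrative names).
    Because ell(z) - ell(-z) = -z, the positive-sample term reduces to
    -pi * mean(g(x_p)); the unlabeled term penalizes positive predictions.
    """
    gp = Xp @ w + b          # g(x) on positive samples
    gu = Xu @ w + b          # g(x) on unlabeled samples
    return -pi * gp.mean() + double_hinge(-gu).mean() + lam * (w @ w)

# Tiny sanity check: with w = 0, b = 0, g(x) = 0 everywhere, so the
# objective is -pi*0 + ell(0) + 0 = 0.5 regardless of the data.
w, b = np.zeros(2), 0.0
Xp = np.array([[1.0, 0.0]])
Xu = np.array([[0.0, 1.0]])
print(pu_objective(w, b, Xp, Xu, pi=0.5, lam=1e-3))
```

Since the objective is convex (piecewise-linear terms plus a quadratic regularizer), it can be minimized with any off-the-shelf convex or QP solver, which is the source of the computational advantage over the non-convex ramp-loss method discussed in the paper.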