Classification from Positive, Unlabeled and Biased Negative Data
Authors: Yu-Guan Hsieh, Gang Niu, Masashi Sugiyama
ICML 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we experimentally investigate the proposed method and compare its performance against several baseline methods. We assess the performance of the proposed method on three benchmark datasets: MNIST, CIFAR-10 and 20 Newsgroups. |
| Researcher Affiliation | Academia | ¹École Normale Supérieure, Paris, France; ²RIKEN, Tokyo, Japan; ³The University of Tokyo, Tokyo, Japan. |
| Pseudocode | Yes | Algorithm 1 PUbN Classification (a simplified sketch of the corresponding risk appears below the table) |
| Open Source Code | No | The paper does not contain any explicit statement about providing open-source code for the described methodology or a link to a code repository. |
| Open Datasets | Yes | We assess the performance of the proposed method on three benchmark datasets: MNIST, CIFAR-10 and 20 Newsgroups. |
| Dataset Splits | Yes | In all the experiments, an additional validation set, equally composed of P, U and bN data, is sampled for both hyperparameter tuning and choosing the model parameters with the lowest validation loss among those obtained after every epoch. |
| Hardware Specification | No | The paper does not contain any specific hardware details (e.g., GPU/CPU models, memory amounts) used for running the experiments. |
| Software Dependencies | No | The paper mentions 'AMSGrad' as the optimizer and 'logistic loss' but does not provide specific version numbers for any software, libraries, or frameworks used (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | All models are learned using AMSGrad (Reddi et al., 2018) as the optimizer and the logistic loss as the surrogate loss unless otherwise specified. In all the experiments, an additional validation set, equally composed of P, U and bN data, is sampled for both hyperparameter tuning and choosing the model parameters with the lowest validation loss among those obtained after every epoch. To recapitulate, for the three datasets we respectively use a 4-layer ConvNet, PreActResNet-18 (He et al., 2016) and a 3-layer fully connected neural network. Both uPU and nnPU are learned with the sigmoid loss, learning rate 10⁻³ for MNIST and initial learning rate 10⁻⁴ for CIFAR-10. (See the training-setup sketch below the table.) |
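The Pseudocode row refers to the paper's Algorithm 1 (PUbN classification), a two-step procedure: first estimate σ(x) = P(s = +1 | x), the probability that a point would appear in the labeled (P or bN) pool, then minimize an empirical version of the risk π_P R_P⁺(g) + ρ R_bN⁻(g) + R_{s=−1}⁻(g). The sketch below is a simplified reading aid, not the authors' code: it keeps only the case in which the s = −1 term is estimated from unlabeled data alone, dropping the paper's threshold-η importance-weighted correction terms on P and bN, and all tensor arguments are hypothetical placeholders.

```python
import torch

def pubn_risk(g_p, g_bn, g_u, sigma_u, pi_p, rho, loss):
    """Simplified PUbN risk: pi_p * R_P^+ + rho * R_bN^- + R_{s=-1}^-.

    g_p, g_bn, g_u -- classifier outputs g(x) on the P, bN and U samples
    sigma_u        -- estimated sigma(x) = P(s = +1 | x) on the U samples
    pi_p           -- class prior P(y = +1); rho -- P(y = -1, s = +1)
    loss           -- surrogate loss l(z), e.g. the logistic loss
    """
    risk_p = pi_p * loss(g_p).mean()       # positives scored as positive
    risk_bn = rho * loss(-g_bn).mean()     # biased negatives scored as negative
    # residual s = -1 mass, estimated from U and weighted by 1 - sigma(x)
    risk_u = ((1.0 - sigma_u) * loss(-g_u)).mean()
    return risk_p + risk_bn + risk_u
```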
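The Experiment Setup row fixes the optimizer and surrogate losses. A minimal sketch of that configuration follows, assuming a PyTorch implementation (the paper does not name its framework); the linear model is a placeholder standing in for the paper's 4-layer ConvNet, PreActResNet-18 and 3-layer MLP.

```python
import torch
import torch.nn.functional as F

def logistic_loss(z):
    # logistic surrogate l(z) = log(1 + exp(-z)), used for PUbN
    return F.softplus(-z)

def sigmoid_loss(z):
    # sigmoid surrogate l(z) = sigmoid(-z), used for the uPU/nnPU baselines
    return torch.sigmoid(-z)

# Placeholder model standing in for the paper's architectures.
model = torch.nn.Linear(784, 1)

# AMSGrad (Reddi et al., 2018) is Adam with the amsgrad flag enabled;
# the paper reports learning rate 1e-3 on MNIST and an initial 1e-4 on CIFAR-10.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, amsgrad=True)
```

Per the row above, the uPU/nnPU baselines would swap `logistic_loss` for `sigmoid_loss` and set the learning rate per dataset.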