Rethinking Class-Prior Estimation for Positive-Unlabeled Learning
Authors: Yu Yao, Tongliang Liu, Bo Han, Mingming Gong, Gang Niu, Masashi Sugiyama, Dacheng Tao
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We run experiments on 2 synthetic datasets and 9 real-world datasets |
| Researcher Affiliation | Collaboration | The University of Sydney; Hong Kong Baptist University; The University of Melbourne; RIKEN AIP; The University of Tokyo; JD Explore Academy, China |
| Pseudocode | Yes | Algorithm 1: ReCPE |
| Open Source Code | Yes | We have also included an anonymous source code in our supplementary material. |
| Open Datasets | Yes | The real-world datasets are downloaded from the UCI machine learning repository. Multi-class datasets are used as binary datasets by either grouping or ignoring classes. |
| Dataset Splits | Yes | We sample the validation set with 20% of the training data size. |
| Hardware Specification | No | The paper mentions training a neural network but does not specify any hardware details such as GPU/CPU models or specific computing resources used for the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., library names like PyTorch or TensorFlow with their versions). |
| Experiment Setup | Yes | For all experiments, we employ a neural network with 2 hidden layers. Each hidden layer contains 50 hidden units. Batch normalization (Ioffe & Szegedy, 2015) is also employed. The stochastic gradient descent optimizer is used with a batch size of 50. The network is trained for 350 epochs with a learning rate of 0.01 and momentum 0. The weight decay is set to 1e-5. (See the sketch below the table.) |
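
The experiment-setup and dataset-split rows above fully specify the network architecture, optimizer, and validation sampling, so a short reproduction sketch may be useful. The following is a minimal sketch assuming PyTorch (the paper does not state its framework); the synthetic data, ReLU activations, binary cross-entropy loss, and all variable names are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of the reported training setup, assuming PyTorch.
# The data and loss below are hypothetical placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset, random_split

def build_model(input_dim: int) -> nn.Module:
    # Two hidden layers of 50 units each, with batch normalization after each layer.
    return nn.Sequential(
        nn.Linear(input_dim, 50), nn.BatchNorm1d(50), nn.ReLU(),
        nn.Linear(50, 50), nn.BatchNorm1d(50), nn.ReLU(),
        nn.Linear(50, 1),  # single logit for positive-vs-unlabeled classification
    )

# Hypothetical PU data: features x and labels y (1 = labeled positive, 0 = unlabeled).
x = torch.randn(2000, 10)
y = (torch.rand(2000) < 0.3).float()
dataset = TensorDataset(x, y)

# Validation set sampled with 20% of the training data size, as stated in the paper.
n_val = int(0.2 * len(dataset))
train_set, val_set = random_split(dataset, [len(dataset) - n_val, n_val])
train_loader = DataLoader(train_set, batch_size=50, shuffle=True)

model = build_model(input_dim=10)
# SGD with learning rate 0.01, momentum 0, and weight decay 1e-5, trained for 350 epochs.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.0, weight_decay=1e-5)
criterion = nn.BCEWithLogitsLoss()

for epoch in range(350):
    for xb, yb in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(xb).squeeze(1), yb)
        loss.backward()
        optimizer.step()
```

This covers only the hyperparameters reported in the paper; the class-prior estimation procedure itself (Algorithm 1, ReCPE) and the choice of loss are not reproduced here.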