Binary Classification from Multiple Unlabeled Datasets via Surrogate Set Classification
Authors: Nan Lu, Shida Lei, Gang Niu, Issei Sato, Masashi Sugiyama
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through experiments, we demonstrate the superiority of our proposed method over state-of-the-art methods. |
| Researcher Affiliation | Academia | 1The University of Tokyo, Tokyo, Japan 2RIKEN, Tokyo, Japan. |
| Pseudocode | Yes | Algorithm 1 Um-SSC based on stochastic optimization |
| Open Source Code | Yes | Our implementation of Um-SSC is available at https://github.com/leishida/Um-Classification. |
| Open Datasets | Yes | We train on the widely adopted benchmarks MNIST, Fashion-MNIST, Kuzushiji-MNIST, and CIFAR-10. |
| Dataset Splits | No | The paper mentions 'training data' and a 'test phase' but does not describe a distinct validation set or how the data were split. |
| Hardware Specification | No | The paper does not specify the hardware used for running the experiments (e.g., GPU models, CPU types). |
| Software Dependencies | No | The paper mentions using 'Adam (Kingma & Ba, 2015) with the cross-entropy loss for optimization' but does not name the deep learning framework (e.g., PyTorch, TensorFlow) or give version numbers for any software dependencies. |
| Experiment Setup | Yes | We train 300 epochs for all the experiments, and the classification error rates at the test phase are reported. All the experiments are repeated 3 times and the mean values with standard deviations are recorded for each method. |
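The reported protocol (train for 300 epochs, repeat each experiment 3 times, report mean test error with standard deviation) can be sketched as below. The `run_experiment` stub is hypothetical: it stands in for one full Um-SSC training run (Adam with cross-entropy loss in the paper), which is not reproduced here.

```python
import random
import statistics

def run_experiment(seed: int) -> float:
    """Placeholder for one full training run (300 epochs in the paper).

    Returns a mock test error rate; a real implementation would train
    Um-SSC with Adam + cross-entropy and evaluate on the test set.
    """
    random.seed(seed)
    return random.uniform(0.05, 0.10)

# Repeat 3 times and report mean +/- standard deviation, as in the paper.
errors = [run_experiment(seed) for seed in range(3)]
mean_err = statistics.mean(errors)
std_err = statistics.stdev(errors)
print(f"test error: {mean_err:.4f} +/- {std_err:.4f}")
```

Seeding each repetition separately, as above, is one common way to make the reported mean and standard deviation reproducible; the paper does not state how its three repetitions were seeded.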