Learning with Multiple Complementary Labels

Authors: Lei Feng, Takuo Kaneko, Bo Han, Gang Niu, Bo An, Masashi Sugiyama

ICML 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we conduct extensive experiments to evaluate the performance of our proposed approaches, including the two wrappers, the unbiased risk estimator with various loss functions, and the two upper-bound surrogate loss functions. Datasets. We use five widely-used benchmark datasets MNIST (LeCun et al., 1998), Kuzushiji-MNIST (Clanuwat et al., 2018), Fashion-MNIST (Xiao et al., 2017), 20Newsgroups (Lang, 1995), and CIFAR-10 (Krizhevsky et al., 2009), and four datasets from the UCI repository (Blake & Merz, 1998). ... Table 2, Table 3, and Table 4 show the experimental results of different approaches...
Researcher Affiliation | Academia | School of Computer Science and Engineering, Nanyang Technological University, Singapore; The University of Tokyo; RIKEN Center for Advanced Intelligence Project; Department of Computer Science, Hong Kong Baptist University.
Pseudocode | No | No pseudocode or algorithm blocks were found.
Open Source Code | No | The paper mentions 'We implement our approach using PyTorch' with a footnote linking to PyTorch's website (www.pytorch.org), which is a third-party library, not the authors' own source code for their methodology. No other statements about open-sourcing their code were found.
Open Datasets | Yes | Datasets. We use five widely-used benchmark datasets MNIST (LeCun et al., 1998), Kuzushiji-MNIST (Clanuwat et al., 2018), Fashion-MNIST (Xiao et al., 2017), 20Newsgroups (Lang, 1995), and CIFAR-10 (Krizhevsky et al., 2009), and four datasets from the UCI repository (Blake & Merz, 1998). (See the loading sketch after the table.)
Dataset Splits | Yes | Hyperparameters for all the approaches are selected so as to maximize the accuracy on a validation set (10% of the training set) of complementarily labeled data. (See the split sketch after the table.)
Hardware Specification | Yes | All the experiments are conducted on NVIDIA Tesla V100 GPUs.
Software Dependencies | No | The paper mentions 'We implement our approach using PyTorch' but does not specify a version number for PyTorch or for any other software dependency.
Experiment Setup | Yes | Learning rate and weight decay are selected from {10^-6, 10^-5, ..., 10^-1}. We implement our approach using PyTorch, and use the Adam (Kingma & Ba, 2015) optimization method with minibatch size set to 256 and epoch number set to 250. (See the training sketch after the table.)
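
All five benchmark datasets are publicly available, although the paper ships no loading code. A minimal sketch, assuming the standard torchvision and scikit-learn loaders (the "data" root directory is an arbitrary choice, not from the paper):

```python
# Sketch only: fetch the public benchmark datasets named in the paper.
from torchvision import datasets, transforms
from sklearn.datasets import fetch_20newsgroups

to_tensor = transforms.ToTensor()
root = "data"  # arbitrary download directory

mnist = datasets.MNIST(root, train=True, download=True, transform=to_tensor)
kmnist = datasets.KMNIST(root, train=True, download=True, transform=to_tensor)  # Kuzushiji-MNIST
fashion = datasets.FashionMNIST(root, train=True, download=True, transform=to_tensor)
cifar10 = datasets.CIFAR10(root, train=True, download=True, transform=to_tensor)
news = fetch_20newsgroups(subset="train")  # 20Newsgroups text corpus
# The four UCI datasets are downloaded separately from the UCI repository.
```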
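
The 10% validation split of the complementarily labeled training data can be reproduced along these lines. This is a sketch under assumptions, not the authors' code; the fixed seed is an illustrative choice:

```python
# Sketch only: hold out 10% of the (complementarily labeled) training set
# for hyperparameter selection, as described in the paper.
import torch
from torch.utils.data import random_split

def split_train_val(train_set, val_fraction=0.1, seed=0):
    n_val = int(len(train_set) * val_fraction)
    generator = torch.Generator().manual_seed(seed)  # fixed seed for repeatability
    return random_split(train_set, [len(train_set) - n_val, n_val], generator=generator)

# Example: train_subset, val_subset = split_train_val(mnist)
```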
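
The reported optimization settings (Adam, minibatch size 256, 250 epochs, learning rate and weight decay each searched over {10^-6, ..., 10^-1}) translate into the sketch below. Here `model_fn` and `loss_fn` are hypothetical placeholders for the paper's network architectures and complementary-label losses, which the report does not reproduce:

```python
# Sketch only: the reported training configuration. model_fn and loss_fn
# are placeholders, not the paper's actual implementation.
import itertools
import torch
from torch.utils.data import DataLoader

grid = [10.0 ** -k for k in range(6, 0, -1)]  # 1e-6, 1e-5, ..., 1e-1

def train(model_fn, loss_fn, train_set, lr, weight_decay,
          epochs=250, batch_size=256, device="cuda"):
    model = model_fn().to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay)
    loader = DataLoader(train_set, batch_size=batch_size, shuffle=True)
    for _ in range(epochs):
        for x, comp_labels in loader:
            x, comp_labels = x.to(device), comp_labels.to(device)
            optimizer.zero_grad()
            loss_fn(model(x), comp_labels).backward()  # complementary-label loss
            optimizer.step()
    return model

# Hyperparameter search: train one model per (lr, weight_decay) pair and keep
# the pair with the highest accuracy on the held-out validation split.
# for lr, wd in itertools.product(grid, grid):
#     model = train(model_fn, loss_fn, train_subset, lr, wd)
#     ... evaluate on val_subset ...
```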