Learning with Complementary Labels Revisited: The Selected-Completely-at-Random Setting Is More Practical
Authors: Wei Wang, Takashi Ishida, Yu-Jie Zhang, Gang Niu, Masashi Sugiyama
ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental results on both synthetic and real-world benchmark datasets validate the superiority of our proposed approach over state-of-the-art methods. |
| Researcher Affiliation | Academia | 1The University of Tokyo, 2RIKEN. |
| Pseudocode | Yes | Algorithm 1 SCARCE |
| Open Source Code | Yes | Our implementation of SCARCE is available at https://github.com/wwangwitsel/SCARCE. |
| Open Datasets | Yes | We conducted experiments on synthetic benchmark datasets, including MNIST (LeCun et al., 1998), Kuzushiji-MNIST (Clanuwat et al., 2018), Fashion-MNIST (Xiao et al., 2017), and CIFAR-10 (Krizhevsky & Hinton, 2009). |
| Dataset Splits | Yes | The training curves and test curves of the method that works by minimizing the URE in Eq. (9) are shown in Figure 1. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models used for running the experiments. |
| Software Dependencies | No | All the methods were implemented in PyTorch (Paszke et al., 2019). We used the Adam optimizer (Kingma & Ba, 2015). |
| Experiment Setup | Yes | The learning rate and batch size were fixed to 1e-3 and 256 for all the datasets, respectively. The weight decay was 1e-3 for CIFAR-10 and 1e-5 for the other three datasets. The number of epochs was set to 200, and we recorded the mean accuracy in the last ten epochs. |
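As a reading aid, the sketch below assembles the reported training configuration from the Software Dependencies and Experiment Setup rows above (Adam optimizer, learning rate 1e-3, batch size 256, weight decay 1e-3 for CIFAR-10 and 1e-5 otherwise, 200 epochs, mean test accuracy over the last ten epochs). It is a minimal, hypothetical reconstruction, not the authors' released implementation: the model, dataset objects, and the complementary-label loss (`loss_fn`, standing in for the SCARCE objective) are placeholders, and the official code is at the GitHub link above.

```python
import torch
from torch.utils.data import DataLoader


def train_with_reported_setup(model, train_set, test_set, loss_fn,
                              dataset_name, device="cuda"):
    """Train with the hyperparameters reported in the paper's experiment setup.

    `loss_fn` is a placeholder for the complementary-label risk estimator
    (e.g. the SCARCE objective), which is not reproduced here.
    """
    # Weight decay: 1e-3 for CIFAR-10, 1e-5 for the other three datasets.
    weight_decay = 1e-3 if dataset_name == "cifar10" else 1e-5
    optimizer = torch.optim.Adam(
        model.parameters(), lr=1e-3, weight_decay=weight_decay
    )
    train_loader = DataLoader(train_set, batch_size=256, shuffle=True)
    test_loader = DataLoader(test_set, batch_size=256, shuffle=False)

    num_epochs = 200
    last_ten_acc = []
    for epoch in range(num_epochs):
        model.train()
        for x, comp_y in train_loader:  # complementary labels during training
            x, comp_y = x.to(device), comp_y.to(device)
            loss = loss_fn(model(x), comp_y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

        # Evaluate on ordinary (true) labels.
        model.eval()
        correct, total = 0, 0
        with torch.no_grad():
            for x, y in test_loader:
                pred = model(x.to(device)).argmax(dim=1).cpu()
                correct += (pred == y).sum().item()
                total += y.numel()
        acc = correct / total
        if epoch >= num_epochs - 10:  # keep the last ten epochs
            last_ten_acc.append(acc)

    # The paper records the mean accuracy over the last ten epochs.
    return sum(last_ten_acc) / len(last_ten_acc)
```

This reconstruction only mirrors the optimizer, learning rate, batch size, weight decay, epoch count, and last-ten-epoch averaging stated in the table; any other detail (network architecture, data augmentation, label generation) would need to be taken from the paper or the released repository.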