Learning from Similarity-Confidence Data
Authors: Yuzhou Cao, Lei Feng, Yitian Xu, Bo An, Gang Niu, Masashi Sugiyama
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on various datasets and deep neural networks clearly demonstrate the effectiveness of the proposed Sconf learning method and risk correction scheme (in Section 7). |
| Researcher Affiliation | Academia | (1) College of Science, China Agricultural University, Beijing, China; (2) College of Computer Science, Chongqing University, Chongqing, China; (3) RIKEN Center for Advanced Intelligence Project, Tokyo, Japan; (4) School of Computer Science and Engineering, Nanyang Technological University, Singapore; (5) The University of Tokyo, Tokyo, Japan. |
| Pseudocode | No | The paper does not include a section or figure explicitly labeled "Pseudocode" or "Algorithm" with structured steps. |
| Open Source Code | No | The paper does not provide an explicit statement about open-sourcing its code or a link to a code repository. |
| Open Datasets | Yes | We evaluated the performance of proposed methods on six widely-used benchmarks: MNIST (LeCun et al., 1998), Fashion-MNIST (Xiao et al.), Kuzushiji-MNIST (Clanuwat et al., 2018), EMNIST (Cohen et al., 2017), SVHN (Netzer et al., 2011), and CIFAR-10 (Krizhevsky, 2012). |
| Dataset Splits | No | For the ERM-based methods (Sconf-Unbiased, Sconf-ABS, Sconf-NN, and SD), the validation accuracy was also calculated according to their empirical risk estimators on a validation set consisting of Sconf data, which means that additional ordinarily labeled data need not be collected for validation when using ERM-based methods. |
| Hardware Specification | Yes | We implemented all the methods in PyTorch (Paszke et al., 2019) and conducted the experiments on NVIDIA Tesla P4 GPUs. |
| Software Dependencies | No | The paper mentions "PyTorch (Paszke et al., 2019)" and "Adam (Kingma & Ba, 2015)" but does not provide specific version numbers for these software dependencies. |
| Experiment Setup | Yes | We trained the model with Adam for 100 epochs (full-batch size) with the default momentum parameters. The learning rate was initially set to 0.1 and divided by 10 every 30 epochs. |
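The reported learning-rate schedule (initial rate 0.1, divided by 10 every 30 epochs over a 100-epoch Adam run) can be sketched as a simple step decay. This is a minimal stdlib-only illustration of the schedule as described; the function name and parameters are hypothetical, not from the paper.

```python
def step_decay_lr(epoch, base_lr=0.1, drop_every=30, factor=0.1):
    """Step-decay schedule from the reported setup: start at base_lr
    and multiply by `factor` every `drop_every` epochs."""
    return base_lr * (factor ** (epoch // drop_every))

# Per-epoch learning rates for the 100-epoch run described in the paper
schedule = [step_decay_lr(e) for e in range(100)]
```

In PyTorch, which the authors state they used, the same schedule would presumably correspond to `torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)` wrapped around an Adam optimizer with `lr=0.1`.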