Confidence Scores Make Instance-dependent Label-noise Learning Possible
Authors: Antonin Berthon, Bo Han, Gang Niu, Tongliang Liu, Masashi Sugiyama
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the utility and effectiveness of our method through multiple experiments on datasets with synthetic label noise and real-world unknown noise. |
| Researcher Affiliation | Academia | ¹RIKEN, ²ENS Paris-Saclay, ³Hong Kong Baptist University, ⁴University of Sydney, ⁵University of Tokyo. |
| Pseudocode | Yes | Algorithm 1 Instance-Level Forward Correction (ILFC). (A generic forward-correction sketch follows the table.) |
| Open Source Code | Yes | The code is available at https://github.com/antoninbrthn/CSIDN. |
| Open Datasets | Yes | In order to corrupt labels from clean datasets such as SVHN and CIFAR10... Finally, we demonstrate the effectiveness of our method on Clothing1M (Xiao et al., 2015)... |
| Dataset Splits | No | The paper mentions a "small validation set" used for calibration on Clothing1M, but does not report its size, its percentage of the data, or how it was drawn; "small validation set" alone is not specific enough to reproduce the split. |
| Hardware Specification | Yes | All models are trained on an NVIDIA Tesla K80 GPU. |
| Software Dependencies | No | The paper mentions optimizers (Adam) and model architectures (Resnet-18), but it does not list specific software dependencies with version numbers (e.g., Python 3.8, PyTorch 1.9, CUDA 11.1). |
| Experiment Setup | Yes | The learning rate is set to 1.0 × 10⁻⁴ and decreased by a factor of 10 every 15 epochs. We set the batch size to 64. (A training-loop sketch follows the table.) |
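The paper's Algorithm 1 (ILFC) builds on forward loss correction with an instance-dependent transition matrix. The sketch below shows only that generic building block in PyTorch; the function name `forward_corrected_loss`, the tensor shapes, and the clamping constant are illustrative assumptions, and the paper's actual estimation of T(x) from confidence scores is not reproduced here.

```python
import torch
import torch.nn.functional as F

def forward_corrected_loss(logits, noisy_labels, transition):
    """Generic forward loss correction with per-instance transition matrices.

    Shapes are assumptions for illustration:
        logits:       (B, C) model outputs estimating the clean-label posterior.
        noisy_labels: (B,)   observed, possibly corrupted labels.
        transition:   (B, C, C) per-instance matrices T(x), with
                      transition[i, j, k] = P(noisy = k | clean = j, x_i).
    """
    clean_probs = F.softmax(logits, dim=1)  # estimated P(clean class | x)
    # Forward correction: p_noisy = T(x)^T p_clean, computed per instance.
    noisy_probs = torch.bmm(clean_probs.unsqueeze(1), transition).squeeze(1)
    # Cross-entropy of the corrected probabilities against the observed noisy labels.
    return F.nll_loss(torch.log(noisy_probs.clamp_min(1e-8)), noisy_labels)
```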
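The reported setup (Adam, learning rate 1.0 × 10⁻⁴ decayed by a factor of 10 every 15 epochs, batch size 64, ResNet-18) maps directly onto a standard PyTorch training loop. A minimal sketch under those reported hyperparameters follows; the random stand-in dataset, the epoch count, and the plain cross-entropy loss are assumptions, not the paper's full method.

```python
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset
from torchvision.models import resnet18

# Reported hyperparameters: Adam, lr = 1.0e-4 decayed by 10x every 15 epochs,
# batch size 64. The random stand-in dataset and 30 epochs are assumptions.
model = resnet18(num_classes=10)
optimizer = torch.optim.Adam(model.parameters(), lr=1.0e-4)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=15, gamma=0.1)

data = TensorDataset(torch.randn(256, 3, 32, 32), torch.randint(0, 10, (256,)))
loader = DataLoader(data, batch_size=64, shuffle=True)

for epoch in range(30):
    for images, labels in loader:
        optimizer.zero_grad()
        # Plain cross-entropy here; a noise-correction loss such as the
        # forward-correction sketch above would take its place.
        loss = F.cross_entropy(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()  # multiply lr by 0.1 every 15 epochs
```

`StepLR` with `step_size=15` and `gamma=0.1` reproduces the reported schedule directly.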