Error-Bounded Correction of Noisy Labels

Authors: Songzhu Zheng, Pengxiang Wu, Aman Goswami, Mayank Goswami, Dimitris Metaxas, Chao Chen

ICML 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We validate our method on different datasets with various noise patterns and levels. Our theoretically founded method outperforms state-of-the-art approaches thanks to its simplicity and principled design.
Researcher Affiliation | Collaboration | (1) Department of Applied Mathematics and Statistics, Stony Brook University, NY, USA; (2) Department of Computer Science, Rutgers University, NJ, USA; (3) Bain & Company, Bangalore, India; (4) Department of Computer Science, City University of New York, NY, USA; (5) Department of Biomedical Informatics, Stony Brook University, NY, USA.
Pseudocode | Yes | Procedure 1 (LRT-Correction) and Procedure 2 (AdaCorr) are provided. (A sketch of Procedure 1 appears below the table.)
Open Source Code | Yes | The code of this paper can be found at https://github.com/pingqingsheng/LRT.git.
Open Datasets | Yes | Datasets. We use the following datasets: MNIST (LeCun et al., 1998), CIFAR10 (Krizhevsky et al., 2009), CIFAR100 (Krizhevsky et al., 2009) and ModelNet40 (Wu et al., 2015).
Dataset Splits | Yes | Similar to MNIST, we split 90% and 10% of the data from the official training set for training and validation, respectively, and use the official test set for testing. (See the split sketch below the table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies | No | The paper mentions 'We utilize RAdam (Liu et al., 2019) for the network optimization', but does not specify version numbers for RAdam or other software dependencies.
Experiment Setup | Yes | We utilize RAdam (Liu et al., 2019) for the network optimization, and adopt batch size 128 for all the datasets. We use an initial learning rate of 0.001, which is decayed by 0.5 every 60 epochs. (See the optimizer sketch below the table.)
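For the Pseudocode row: Procedure 1 (LRT-Correction) compares the likelihood ratio f(x)[y~] / f(x)[y*] between the given noisy label y~ and the model's top prediction y*, and flips the label to y* when the ratio falls below a threshold delta. Below is a minimal NumPy sketch under that reading; the fixed `delta` and all names are illustrative, since the paper's AdaCorr (Procedure 2) schedules the threshold adaptively during training.

```python
import numpy as np

def lrt_correct(probs, noisy_labels, delta=0.5):
    # probs:        (n, m) array of softmax outputs f(x)
    # noisy_labels: (n,) array of possibly corrupted labels y~
    # delta:        likelihood-ratio threshold in (0, 1]; fixed here as an
    #               assumption (AdaCorr adjusts it adaptively in the paper)
    idx = np.arange(len(probs))
    y_star = probs.argmax(axis=1)                          # top prediction y*
    ratio = probs[idx, noisy_labels] / probs[idx, y_star]  # f(x)[y~] / f(x)[y*]
    # Flip the label to y* when the noisy label is too unlikely relative to y*.
    return np.where(ratio < delta, y_star, noisy_labels)

# Example: the first label is flipped to class 1, the second is kept.
probs = np.array([[0.1, 0.9], [0.6, 0.4]])
print(lrt_correct(probs, np.array([0, 0])))  # -> [1 0]
```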
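For the Dataset Splits row, the 90%/10% protocol corresponds to a standard random split of the official training set. A minimal sketch assuming PyTorch/torchvision; the dataset choice (CIFAR-10) and the seed are illustrative, not taken from the paper.

```python
import torch
from torch.utils.data import random_split
from torchvision import datasets, transforms

# Split the official training set 90%/10% into train/validation;
# the official test set is reserved for testing.
full_train = datasets.CIFAR10(root="./data", train=True, download=True,
                              transform=transforms.ToTensor())
n_train = int(0.9 * len(full_train))
train_set, val_set = random_split(
    full_train, [n_train, len(full_train) - n_train],
    generator=torch.Generator().manual_seed(0))  # seed is an assumption
test_set = datasets.CIFAR10(root="./data", train=False, download=True,
                            transform=transforms.ToTensor())
```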
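For the Experiment Setup row, the reported hyperparameters map directly onto an optimizer configuration. A minimal PyTorch sketch, with torch.optim.RAdam standing in for the original RAdam implementation of Liu et al. (2019); the model and the total epoch count are placeholders.

```python
import torch
from torch.optim.lr_scheduler import StepLR

model = torch.nn.Linear(32 * 32 * 3, 10)                    # placeholder model
optimizer = torch.optim.RAdam(model.parameters(), lr=1e-3)  # initial LR 0.001
scheduler = StepLR(optimizer, step_size=60, gamma=0.5)      # halve LR every 60 epochs

for epoch in range(180):                                    # epoch count assumed
    # ... train over mini-batches of size 128, calling optimizer.step() ...
    scheduler.step()                                        # apply the decay schedule
```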