Understanding and Utilizing Deep Neural Networks Trained with Noisy Labels

Authors: Pengfei Chen, Ben Ben Liao, Guangyong Chen, Shengyu Zhang

ICML 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on both synthetic and real-world noisy labels show that, compared with extensive state-of-the-art methods, our strategy consistently improves the generalization performance of DNNs.
Researcher Affiliation | Collaboration | ¹Department of Computer Science and Engineering, The Chinese University of Hong Kong; ²Tencent Technology. Correspondence to: Guangyong Chen <gycchen@tencent.com>.
Pseudocode | Yes | Algorithm 1, Noisy Cross-Validation (NCV): selecting clean samples out of the noisy ones; Algorithm 2, Iterative Noisy Cross-Validation (INCV): selecting clean samples out of the noisy ones; Algorithm 3, Training DNNs robustly against noisy labels. (A Python sketch of NCV follows the table.)
Open Source Code | Yes | Our code is available at https://github.com/chenpf1025/noisy_label_understanding_utilizing.
Open Datasets | Yes | Our method is verified on (i) the CIFAR-10 dataset (Krizhevsky & Hinton, 2009) with synthetic noisy labels generated by randomly flipping the original ones, and (ii) the WebVision dataset (Li et al., 2017), which is a large benchmark consisting of 2.4 million images crawled from websites, containing real-world noisy labels.
Dataset Splits | Yes | Given a noisy dataset D, we implement cross-validation to randomly split it into two halves D1, D2, then train the ResNet-110 (He et al., 2016b) on D1 and test on D2.
Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments, such as GPU models, CPU types, or memory.
Software Dependencies | No | The paper mentions that the implementation is based on PyTorch but does not specify a version number for PyTorch or any other software dependencies.
Experiment Setup | Yes | In the experiments, we set the batch size |B_S| to 128, then compute |B_C| accordingly. ... we set n(e) = |B_S|(1 − ε_S · min(e/10, 1)), which means we decrease n(e) from |B_S| to |B_S|(1 − ε_S) linearly over the first 10 epochs and fix it afterwards. (See the schedule sketch below the table.)
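
To make the quoted NCV procedure concrete, here is a minimal Python sketch of the selection criterion from Algorithm 1 and the split described under "Dataset Splits": randomly halve the noisy set, train on each half, and keep held-out samples whose noisy labels agree with the trained model's predictions. The `fit_predict` callable is a hypothetical placeholder for the paper's actual training routine (a ResNet-110 trained from scratch), so this is an illustration of the selection rule, not the authors' implementation.

```python
import numpy as np

def noisy_cross_validation(X, y, fit_predict, seed=0):
    """One NCV round (sketch of Algorithm 1): train on one half of the
    noisy data, select held-out samples whose noisy labels match the
    model's predictions, then repeat with the halves swapped."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    half1, half2 = idx[: len(X) // 2], idx[len(X) // 2 :]

    selected = []
    for train_idx, eval_idx in ((half1, half2), (half2, half1)):
        # fit_predict(X_train, y_train, X_eval) -> predicted labels for
        # X_eval; in the paper this step trains a ResNet-110.
        preds = fit_predict(X[train_idx], y[train_idx], X[eval_idx])
        selected.append(eval_idx[preds == y[eval_idx]])  # label agreement
    return np.concatenate(selected)  # indices of the selected "clean" subset

# Usage (hypothetical model wrapper):
# clean_idx = noisy_cross_validation(X, y, fit_predict=train_and_predict)
```

INCV (Algorithm 2) iterates this selection, enlarging the trusted subset each round; the sketch above covers only a single round.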
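
The n(e) schedule quoted under "Experiment Setup" is simple enough to sanity-check numerically. Below is a small sketch assuming an illustrative noise-ratio estimate eps_s = 0.3; in the paper, ε_S is estimated via INCV rather than chosen by hand.

```python
def n_keep(epoch, batch_size=128, eps_s=0.3):
    """n(e) = |B_S| * (1 - eps_s * min(e/10, 1)): the number of
    small-loss samples kept per mini-batch, decreasing linearly from
    |B_S| to |B_S| * (1 - eps_s) over the first 10 epochs, then fixed."""
    return int(batch_size * (1 - eps_s * min(epoch / 10, 1)))

print([n_keep(e) for e in (0, 5, 10, 50)])  # [128, 108, 89, 89]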