Sample Selection with Uncertainty of Losses for Learning with Noisy Labels

Authors: Xiaobo Xia, Tongliang Liu, Bo Han, Mingming Gong, Jun Yu, Gang Niu, Masashi Sugiyama

ICLR 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments demonstrate that the proposed method is superior to baselines and robust to a broad range of label noise types.
Researcher Affiliation | Academia | TML Lab, The University of Sydney; Hong Kong Baptist University; The University of Melbourne; University of Science and Technology of China; RIKEN AIP; The University of Tokyo
Pseudocode | Yes | Algorithm 1: CNLCU Algorithm. 1: Input θ₁ and θ₂, learning rate η, fixed τ, epoch T_k and T_max, iteration t_max;
Open Source Code | No | The paper does not contain any statement about making the source code available or provide a link to a code repository.
Open Datasets | Yes | Datasets. We verify the effectiveness of our method on the manually corrupted version of the following datasets: MNIST (LeCun et al.), F-MNIST (Xiao et al., 2017), CIFAR-10 (Krizhevsky, 2009), and CIFAR-100 (Krizhevsky, 2009).
Dataset Splits | Yes | We leave out 10% of noisy training examples as a validation set.
Hardware Specification | Yes | implement all methods with default parameters by PyTorch, and conduct all the experiments on NVIDIA Titan Xp GPUs.
Software Dependencies | No | The paper mentions software such as PyTorch and the Adam optimizer but does not specify their version numbers, which are required for reproducibility.
Experiment Setup | Yes | For all experiments, the Adam optimizer (Kingma & Ba, 2014) (momentum=0.9) is used with an initial learning rate of 0.001, the batch size is set to 128, and we run 200 epochs. We linearly decay the learning rate to zero from epoch 80 to epoch 200, as done in (Han et al., 2018).
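
The pseudocode excerpt in the Pseudocode row lists only the algorithm's inputs: two networks θ₁ and θ₂, a learning rate η, a fixed τ, and the budgets T_k, T_max, and t_max. For orientation only, the sketch below shows a generic co-teaching-style small-loss selection step that such inputs typically drive, where the keep ratio is usually ramped down toward 1 − τ over the first T_k epochs. This is not the paper's CNLCU update, which selects samples with an uncertainty-aware loss statistic rather than a single point-estimate loss; the helper name `small_loss_update` is illustrative.

```python
# Generic co-teaching-style small-loss selection step for two networks, shown
# only to illustrate the role of the inputs in the pseudocode excerpt
# (theta_1, theta_2, and a keep ratio derived from tau). This is NOT the
# paper's CNLCU update, which uses an uncertainty-aware loss statistic.
import torch
import torch.nn.functional as F

def small_loss_update(net1, net2, opt1, opt2, images, labels, keep_ratio):
    # Per-example losses under each network (no reduction).
    losses1 = F.cross_entropy(net1(images), labels, reduction="none")
    losses2 = F.cross_entropy(net2(images), labels, reduction="none")
    num_keep = int(keep_ratio * len(labels))

    # Each network keeps the examples the *other* network finds small-loss.
    idx1 = torch.argsort(losses1)[:num_keep]   # clean-looking examples by net1
    idx2 = torch.argsort(losses2)[:num_keep]   # clean-looking examples by net2

    opt1.zero_grad()
    losses1[idx2].mean().backward()            # net1 trains on net2's selection
    opt1.step()

    opt2.zero_grad()
    losses2[idx1].mean().backward()            # net2 trains on net1's selection
    opt2.step()
```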
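
The Open Datasets and Dataset Splits rows state that the benchmarks are manually corrupted versions of standard image datasets, with 10% of the noisy training examples held out for validation. A minimal sketch of that preparation for CIFAR-10 follows, assuming symmetric (uniform) label flipping as one illustrative noise type and an arbitrary noise rate of 0.2; the paper evaluates a broader range of noise types and rates, and the helper name `corrupt_labels_symmetric` is hypothetical.

```python
# Sketch: load CIFAR-10, inject synthetic symmetric label noise, and hold out
# 10% of the noisy training examples for validation. The noise model and rate
# are assumptions for illustration only.
import numpy as np
import torch
from torch.utils.data import random_split
from torchvision import datasets, transforms

def corrupt_labels_symmetric(labels, noise_rate, num_classes=10, seed=0):
    """Flip each label to a uniformly random *other* class with prob. noise_rate."""
    rng = np.random.RandomState(seed)
    labels = np.array(labels)
    flip_mask = rng.rand(len(labels)) < noise_rate
    random_labels = rng.randint(0, num_classes, size=len(labels))
    # Avoid flipping a label onto itself.
    same = random_labels == labels
    random_labels[same] = (random_labels[same] + 1) % num_classes
    labels[flip_mask] = random_labels[flip_mask]
    return labels.tolist()

transform = transforms.Compose([transforms.ToTensor()])
train_set = datasets.CIFAR10(root="./data", train=True, download=True, transform=transform)
train_set.targets = corrupt_labels_symmetric(train_set.targets, noise_rate=0.2)

# "We leave out 10% of noisy training examples as a validation set."
val_size = int(0.1 * len(train_set))
train_subset, val_subset = random_split(
    train_set, [len(train_set) - val_size, val_size],
    generator=torch.Generator().manual_seed(0))
```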
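
The Experiment Setup row fully specifies the optimization hyperparameters: Adam with an initial learning rate of 0.001, batch size 128, 200 epochs, and a learning rate decayed linearly to zero between epoch 80 and epoch 200. A minimal sketch of that schedule follows; the reported momentum of 0.9 is interpreted here as Adam's beta1, and the model and data loader are placeholders.

```python
# Sketch of the reported training configuration: Adam (lr=0.001), batch size
# 128, 200 epochs, lr decayed linearly to zero from epoch 80 to epoch 200.
# The model and dataset below are placeholders, not the paper's networks.
import torch
from torch.utils.data import DataLoader, TensorDataset

model = torch.nn.Linear(3 * 32 * 32, 10)   # placeholder network
dummy_data = TensorDataset(torch.randn(512, 3 * 32 * 32), torch.randint(0, 10, (512,)))
train_loader = DataLoader(dummy_data, batch_size=128, shuffle=True)

# "momentum=0.9" is read here as Adam's beta1 = 0.9 (an interpretation).
optimizer = torch.optim.Adam(model.parameters(), lr=0.001, betas=(0.9, 0.999))

def lr_lambda(epoch, decay_start=80, max_epoch=200):
    # Constant lr up to epoch 80, then linear decay to zero at epoch 200.
    if epoch < decay_start:
        return 1.0
    return max(0.0, (max_epoch - epoch) / (max_epoch - decay_start))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lr_lambda)

for epoch in range(200):
    for images, labels in train_loader:
        loss = torch.nn.functional.cross_entropy(model(images), labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    scheduler.step()
```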