Label Distribution for Learning with Noisy Labels

Authors: Yun-Peng Liu, Ning Xu, Yu Zhang, Xin Geng

IJCAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on both synthetic and real-world datasets substantiate the superiority of the proposed algorithm against state-of-the-art methods.
Researcher Affiliation | Academia | MOE Key Laboratory of Computer Network and Information Integration, China; School of Computer Science and Engineering, Southeast University, Nanjing 210096, China; {yunpengliu, xning, zhang yu, xgeng}@seu.edu.cn
Pseudocode | Yes | Algorithm 1: Label Distribution based Confidence Estimation (an illustrative sketch follows the table)
Open Source Code | No | The paper provides no statement or link indicating that code for the described method has been released.
Open Datasets | Yes | The experiments are conducted on CIFAR10 and CIFAR100 [Krizhevsky et al., 2009] with synthetic label noise and Clothing1M [Xiao et al., 2015] with real-world label noise.
Dataset Splits | Yes | The training set is split into two parts, with trusted fractions of 5% and 10%; synthetic label noise is then added to the untrusted part. The validation set and test set have 14,313 and 10,526 images, respectively. (A split sketch follows the table.)
Hardware Specification | No | The paper does not describe the specific hardware (e.g., GPU model, CPU type, memory) used to run its experiments.
Software Dependencies | No | The experiments are implemented with the PyTorch framework.
Experiment Setup | Yes | For the estimation model... the learning rate is 0.1 with a decay step of 60 and a decay rate of 0.1; the hyper-parameters are α = 0.6 and δ = 0.5. ... For the classifier model... the learning rate is 0.1 with multi-step decay at epochs [60, 80, 90] and a decay rate of 0.2. For both the estimation model and the classifier model, we use the SGD optimizer with 0.9 momentum and an ℓ2 weight decay of 1×10⁻⁴, and train the models for 100 epochs. (A training-setup sketch follows the table.)
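
The Pseudocode row above cites Algorithm 1, "Label Distribution based Confidence Estimation", but the table does not reproduce its steps. As an illustration only, here is a minimal sketch of one generic way to score label confidence from a model's predicted label distribution: each sample is scored by the probability the model assigns to its observed (possibly noisy) label, and a threshold `delta` (name borrowed from the δ = 0.5 hyper-parameter in the Experiment Setup row) flags high-confidence samples. The function `estimate_confidence` and its logic are assumptions for illustration, not the authors' algorithm.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def estimate_confidence(model, loader, delta=0.5, device="cpu"):
    """Illustrative sketch, not the paper's Algorithm 1: score each
    sample by the probability the estimation model assigns to its
    observed (possibly noisy) label, then flag samples whose score
    clears the threshold delta."""
    model.eval()
    confidences, trusted_mask = [], []
    for images, noisy_labels in loader:
        images = images.to(device)
        noisy_labels = noisy_labels.to(device)
        # Predicted label distribution for each image.
        probs = F.softmax(model(images), dim=1)
        # Confidence = probability mass on the observed label.
        conf = probs.gather(1, noisy_labels.unsqueeze(1)).squeeze(1)
        confidences.append(conf.cpu())
        trusted_mask.append((conf >= delta).cpu())
    return torch.cat(confidences), torch.cat(trusted_mask)
```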
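
The Dataset Splits row describes holding out a small trusted fraction of the training set and corrupting the rest with synthetic label noise. Below is a minimal sketch of such a split on CIFAR10 via torchvision, assuming uniform (symmetric) label noise with an arbitrary `noise_rate` of 0.4; the paper's exact noise model and noise rates are not given in this table, so the helper `trusted_untrusted_split` is purely illustrative.

```python
import numpy as np
from torchvision import datasets

def trusted_untrusted_split(labels, trusted_frac=0.05, noise_rate=0.4, seed=0):
    """Split indices into a small trusted part and an untrusted part,
    then corrupt the untrusted labels with uniform (symmetric) noise.
    Illustrative only; the noise model and noise_rate are assumptions."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    perm = rng.permutation(len(labels))
    n_trusted = int(trusted_frac * len(labels))
    trusted_idx, untrusted_idx = perm[:n_trusted], perm[n_trusted:]

    noisy_labels = labels.copy()
    n_classes = labels.max() + 1
    # Flip a noise_rate fraction of the untrusted labels uniformly at random.
    flip = untrusted_idx[rng.random(len(untrusted_idx)) < noise_rate]
    noisy_labels[flip] = rng.integers(0, n_classes, size=len(flip))
    return trusted_idx, untrusted_idx, noisy_labels

train_set = datasets.CIFAR10(root="./data", train=True, download=True)
trusted_idx, untrusted_idx, noisy = trusted_untrusted_split(
    train_set.targets, trusted_frac=0.05)  # 5% trusted, as in the row above
```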
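
The Experiment Setup row fully specifies the optimizer and learning-rate schedule, so a PyTorch sketch is straightforward. The hyper-parameters below (SGD, learning rate 0.1, momentum 0.9, ℓ2 weight decay 1×10⁻⁴, multi-step decay at [60, 80, 90] with rate 0.2, 100 epochs) come from that row; the toy model and random data are stand-ins, since the table does not specify the architecture or data pipeline.

```python
import torch
import torch.nn as nn
from torch.optim.lr_scheduler import MultiStepLR
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-ins so the sketch runs; the actual architecture and data
# pipeline are not specified in this table.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
train_loader = DataLoader(
    TensorDataset(torch.randn(256, 3, 32, 32),
                  torch.randint(0, 10, (256,))),
    batch_size=64)

criterion = nn.CrossEntropyLoss()
# Classifier-model hyper-parameters from the Experiment Setup row:
# lr 0.1, momentum 0.9, l2 weight decay 1e-4, multi-step decay at
# epochs [60, 80, 90] with decay rate 0.2, trained for 100 epochs.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=1e-4)
scheduler = MultiStepLR(optimizer, milestones=[60, 80, 90], gamma=0.2)

for epoch in range(100):
    model.train()
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()
```

For the estimation model, the row instead lists a single decay step of 60 with rate 0.1, which corresponds to replacing the scheduler above with StepLR(optimizer, step_size=60, gamma=0.1).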