Asymmetric Loss Functions for Learning with Noisy Labels
Authors: Xiong Zhou, Xianming Liu, Junjun Jiang, Xin Gao, Xiangyang Ji
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on benchmark datasets demonstrate that asymmetric loss functions can outperform state-of-the-art methods. The code is available at https://github.com/hitcszx/ALFs |
| Researcher Affiliation | Academia | 1 Harbin Institute of Technology, 2 Peng Cheng Laboratory, 3 King Abdullah University of Science and Technology, 4 Tsinghua University. |
| Pseudocode | No | No pseudocode or algorithm block found in the paper. |
| Open Source Code | Yes | The code is available at https://github.com/hitcszx/ALFs |
| Open Datasets | Yes | In this section, we empirically investigate asymmetric loss functions on benchmark datasets, including MNIST (Lecun et al., 1998), CIFAR-10/-100 (Krizhevsky & Hinton, 2009), and a real-world noisy dataset WebVision (Li et al., 2017). |
| Dataset Splits | No | The top-1 validation accuracies under different loss functions on the clean WebVision validation set are reported in Table 3. |
| Hardware Specification | No | No specific hardware details (e.g., GPU models, CPU types) are provided in the paper. |
| Software Dependencies | No | The paper mentions general software components like 'deep neural networks' and 'ResNet-50' but does not specify any software names with version numbers (e.g., PyTorch 1.9) needed to replicate the experiment. |
| Experiment Setup | No | The noise generation, networks, training details, hyper-parameter settings and more experimental results can be found in the supplementary material. |
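For context on the paper's central object: one member of its family of asymmetric losses is the Asymmetric Generalized Cross Entropy (AGCE). Below is a minimal scalar sketch of that loss as we understand it from the paper; the hyper-parameter values are illustrative defaults, not the authors' tuned settings, and the official repository at https://github.com/hitcszx/ALFs should be treated as the reference implementation.

```python
import math


def agce_loss(p_y: float, a: float = 1.0, q: float = 0.5) -> float:
    """Asymmetric Generalized Cross Entropy (AGCE), per-example sketch.

    p_y : predicted probability assigned to the labeled class (0 <= p_y <= 1).
    a, q: positive hyper-parameters; the values chosen here are illustrative.

    The loss is zero when the prediction fully agrees with the label
    (p_y = 1) and grows as p_y decreases toward 0.
    """
    return ((a + 1.0) ** q - (a + p_y) ** q) / q


# Sanity checks: zero loss on a perfect prediction, monotone in confidence.
assert math.isclose(agce_loss(1.0), 0.0)
assert agce_loss(0.1) > agce_loss(0.9) > 0.0
```

The asymmetry of such losses (their weighting of correct versus incorrect fitting) is what the paper argues makes them robust to label noise; the scalar form above only shows the shape of one instance.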