Searching to Exploit Memorization Effect in Learning with Noisy Labels
Authors: Quanming Yao, Hansi Yang, Bo Han, Gang Niu, James Tin-Yau Kwok
ICML 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments are performed on benchmark data sets. Results demonstrate that the proposed method is much better than the state-of-the-art noisy-label-learning approaches, and also much more efficient than existing AutoML algorithms. |
| Researcher Affiliation | Collaboration | Quanming Yao: 4Paradigm Inc. (Hong Kong); Hansi Yang: Department of Electrical Engineering, Tsinghua University; Bo Han: Department of Computer Science, Hong Kong Baptist University and RIKEN Center for Advanced Intelligence Project; Gang Niu: RIKEN Center for Advanced Intelligence Project; James Tin-Yau Kwok: Department of Computer Science and Engineering, Hong Kong University of Science and Technology. |
| Pseudocode | Yes | Algorithm 1 ("General procedure on using sample selection to combat noisy labels") and Algorithm 2 ("Search to Exploit (S2E) algorithm for the minimization of the relaxed objective J in (6)"); a minimal sketch of the selection step in Algorithm 1 is given after the table. |
| Open Source Code | No | The paper states 'All the codes are implemented in PyTorch 0.4.1, and run on a GTX 1080 Ti GPU.' but does not provide a link or explicit statement about making the code open source. |
| Open Datasets | Yes | We use three popular benchmark data sets: MNIST, CIFAR-10 and CIFAR-100. Following (Patrini et al., 2017; Han et al., 2018), we add two types of label noise: (i) symmetric flipping, which flips the label to other incorrect labels with equal probabilities; and (ii) pair flipping, which flips a pair of similar labels. A sketch of both noise models follows the table. |
| Dataset Splits | Yes | Let the noisy training (resp. clean validation) data set be $\mathcal{D}_{\mathrm{tr}}$ (resp. $\mathcal{D}_{\mathrm{val}}$), the training (resp. validation) loss be $\mathcal{L}_{\mathrm{tr}}$ (resp. $\mathcal{L}_{\mathrm{val}}$), and $f$ be a neural network with model parameter $w$. We formulate the design of $R(\cdot)$ in Algorithm 1 as the following AutoML problem: $R^{*} = \arg\min_{R(\cdot) \in \mathcal{F}} \mathcal{L}_{\mathrm{val}}(f(w^{*}; R), \mathcal{D}_{\mathrm{val}})$, s.t. $w^{*} = \arg\min_{w} \mathcal{L}_{\mathrm{tr}}(f(w; R), \mathcal{D}_{\mathrm{tr}})$ (restated in display form after the table). |
| Hardware Specification | Yes | All the codes are implemented in PyTorch 0.4.1, and run on a GTX 1080 Ti GPU. |
| Software Dependencies | Yes | All the codes are implemented in PyTorch 0.4.1, and run on a GTX 1080 Ti GPU. |
| Experiment Setup | Yes | "The detailed experimental setup is in Appendix A.1." and "The detailed setup is in Appendix A.2.1." |
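
Algorithm 1 quoted under Pseudocode is the generic sample-selection procedure that exploits the memorization effect: in each mini-batch, only the $R(t)$ fraction of samples with the smallest loss is used for the gradient update, and $R(t)$ is the schedule the paper then searches over. The following is a minimal PyTorch sketch of that selection step, not the authors' released code; the function and variable names are our own.

```python
import torch
import torch.nn.functional as F

def small_loss_update(model, optimizer, images, labels, keep_ratio):
    """One mini-batch step of sample selection: keep only the `keep_ratio`
    (i.e. R(t)) fraction of small-loss samples and update on them."""
    logits = model(images)
    per_sample_loss = F.cross_entropy(logits, labels, reduction="none")

    # Memorization effect: networks fit clean patterns first, so small-loss
    # samples are more likely to carry correct labels.
    num_keep = max(1, int(keep_ratio * labels.size(0)))
    _, keep_idx = torch.topk(per_sample_loss, num_keep, largest=False)

    loss = per_sample_loss[keep_idx].mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Choosing how $R(t)$ changes over epochs is exactly the design problem that S2E formulates as an AutoML search (see the Dataset Splits row).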
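
The two noise models quoted under Open Datasets can be written as noise-transition matrices. Below is a minimal NumPy sketch assuming the usual conventions from Patrini et al. (2017) and Han et al. (2018): symmetric noise spreads the flip probability uniformly over the other classes, and pair noise flips class $i$ to class $(i+1) \bmod C$. The helper names are ours, not from the paper.

```python
import numpy as np

def symmetric_transition(num_classes: int, noise_rate: float) -> np.ndarray:
    """Symmetric flipping: with probability `noise_rate`, a label is flipped
    to one of the other classes with equal probability."""
    T = np.full((num_classes, num_classes), noise_rate / (num_classes - 1))
    np.fill_diagonal(T, 1.0 - noise_rate)
    return T

def pair_transition(num_classes: int, noise_rate: float) -> np.ndarray:
    """Pair flipping: with probability `noise_rate`, class i is flipped to a
    similar class, taken here (by assumption) to be (i + 1) mod num_classes."""
    T = np.eye(num_classes) * (1.0 - noise_rate)
    for i in range(num_classes):
        T[i, (i + 1) % num_classes] = noise_rate
    return T

def corrupt_labels(labels: np.ndarray, T: np.ndarray, seed: int = 0) -> np.ndarray:
    """Sample a noisy label for each clean label according to transition matrix T."""
    rng = np.random.default_rng(seed)
    return np.array([rng.choice(len(T), p=T[y]) for y in labels])
```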
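
For readability, the bilevel AutoML formulation quoted under Dataset Splits is restated here in display form (same notation as the quote; $\mathcal{F}$ is the search space of schedules $R(\cdot)$):

```latex
R^{*} = \arg\min_{R(\cdot)\,\in\,\mathcal{F}}
        \mathcal{L}_{\mathrm{val}}\bigl(f(w^{*}; R),\, \mathcal{D}_{\mathrm{val}}\bigr)
\quad \text{s.t.} \quad
w^{*} = \arg\min_{w}
        \mathcal{L}_{\mathrm{tr}}\bigl(f(w; R),\, \mathcal{D}_{\mathrm{tr}}\bigr)
```

The inner problem trains the network on the noisy training set for a fixed schedule $R(\cdot)$; the outer problem selects the schedule whose trained model attains the lowest loss on the clean validation set.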