Peer Loss Functions: Learning from Noisy Labels without Knowing Noise Rates
Authors: Yang Liu, Hongyi Guo
ICML 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 5. Experiments: We implemented a two-layer ReLU Multi-Layer Perceptron (MLP) for classification tasks on 10 UCI Benchmarks and applied our peer loss to update their parameters. We show the robustness of peer loss with increasing rates of label noise on 10 real-world datasets. We compare the performance of our peer loss based method with surrogate loss method (Natarajan et al., 2013) (unbiased loss correction with known error rates), symmetric loss method (Ghosh et al., 2015), DMI (Xu et al., 2019), C-SVM (Liu et al., 2003) and PAM (Khardon & Wachman, 2007), which are state-of-the-art methods for dealing with random binary-classification noise, as well as a neural network baseline solution with binary cross entropy loss (NN). |
| Researcher Affiliation | Academia | 1Computer Science and Engineering, UC Santa Cruz, Santa Cruz, CA, USA 2Computer Science and Engineering, Shanghai Jiao Tong University, China. Correspondence to: Yang Liu <yangliu@ucsc.edu>, Hongyi Guo <guohongyi@sjtu.edu.cn>. |
| Pseudocode | No | No structured pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | Yes | Our implementation of peer loss functions is available at https://github.com/gohsyi/PeerLoss. |
| Open Datasets | Yes | We implemented a two-layer ReLU Multi-Layer Perceptron (MLP) for classification tasks on 10 UCI Benchmarks... Preliminary results on multi-class classification We provide preliminary results on CIFAR-10 (Krizhevsky et al., 2009) in Table 2. |
| Dataset Splits | No | We use a cross-validation set to tune the parameters specific to the algorithms. For p = 0.5, we use a validation dataset (still with noisy labels) to tune α. The paper mentions using validation data but does not specify the splits (e.g., percentages, sample counts, or explicit methodology for creating the splits). |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper mentions implementing an MLP and using ResNet, but does not provide specific software versions for libraries, frameworks, or programming languages (e.g., PyTorch version, Python version). |
| Experiment Setup | No | The paper states that neural-network-based methods use the same hyperparameters and that parameters were tuned using a cross-validation set, but it does not provide the specific values for these hyperparameters (e.g., learning rate, batch size, number of epochs, optimizer settings) in the main text. |
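Since the paper's linked repository is the only implementation artifact it points to, the following is a minimal PyTorch sketch of the α-weighted peer loss it studies: for each sample, the usual loss on the matched (sample, noisy label) pair minus α times the loss on an independently drawn peer sample paired with an independently drawn peer label. The class name `PeerLoss`, the in-batch shuffling used to draw peer pairs, and all layer widths, optimizer settings, and data sizes below are illustrative assumptions, not the authors' released code or the unreported hyperparameters noted in the table above.

```python
import torch
import torch.nn as nn

class PeerLoss(nn.Module):
    """Sketch of the alpha-weighted peer loss:
    l(f(x_n), y_n) - alpha * l(f(x_{n1}), y_{n2}),
    where the subtracted peer term evaluates a randomly drawn
    prediction against an independently drawn noisy label."""
    def __init__(self, alpha: float = 1.0):
        super().__init__()
        self.alpha = alpha
        self.base = nn.CrossEntropyLoss()

    def forward(self, logits: torch.Tensor, noisy_labels: torch.Tensor) -> torch.Tensor:
        # First term: ordinary loss on the matched (sample, label) pairs.
        matched = self.base(logits, noisy_labels)
        # Peer term: two independent permutations break the pairing
        # within the mini-batch (an assumed in-batch approximation of
        # sampling peer examples and peer labels independently).
        n = logits.size(0)
        peer = self.base(logits[torch.randperm(n)],
                         noisy_labels[torch.randperm(n)])
        return matched - self.alpha * peer

# Illustrative usage on dummy data with a two-layer ReLU MLP, matching
# the architecture named in the paper's UCI experiments; the hidden
# width, optimizer, learning rate, and step count are assumptions.
torch.manual_seed(0)
in_dim, n_samples = 10, 256
features = torch.randn(n_samples, in_dim)
noisy_labels = torch.randint(0, 2, (n_samples,))  # binary noisy labels

model = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, 2))
criterion = PeerLoss(alpha=1.0)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(5):  # a few steps, just to show the training-loop shape
    optimizer.zero_grad()
    loss = criterion(model(features), noisy_labels)
    loss.backward()
    optimizer.step()
```

With α = 1 this reduces to the basic peer loss; the paper tunes α on a (noisy) validation set, which is why the Dataset Splits and Experiment Setup rows above flag the missing split and hyperparameter details.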