Understanding Instance-Level Label Noise: Disparate Impacts and Treatments
Authors: Yang Liu
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We simulate a 2D example: there are two classes of instances. The outer annulus represents one class and the inner ball is the other. Given the plotted training data, we train a 2-layer neural network using the cross-entropy (CE) loss. ... We further illustrate this in Figure 3 where we train a neural network on the CIFAR-10 dataset with synthesized noisy labels. (A hedged sketch of the 2D setup follows the table.) |
| Researcher Affiliation | Academia | 1Department of Computer Science and Engineering, University of California, Santa Cruz, CA, USA. Correspondence to: Yang Liu <yangliu@ucsc.edu>. |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. |
| Open Datasets | Yes | Figure 1 shows a collection of 10 similar "Cats" from the CIFAR-10 dataset (Krizhevsky et al., 2009). ... Figure 3 where we train a neural network on the CIFAR-10 dataset with synthesized noisy labels. (A hedged noise-synthesis sketch follows the table.) |
| Dataset Splits | No | The paper does not provide specific dataset split information (e.g., percentages, sample counts) for training, validation, or testing. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers). |
| Experiment Setup | No | The paper mentions training a '2-layer neural network using the cross-entropy (CE) loss' but does not provide specific hyperparameters (e.g., learning rate, batch size, number of epochs) or detailed training configurations in the main text. |
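The Research Type evidence describes a 2D simulation: an outer annulus forms one class, an inner ball the other, and a 2-layer network is fit with the cross-entropy (CE) loss. As the Experiment Setup row notes, the paper gives no sample counts, radii, hidden width, or optimizer settings, so the following PyTorch sketch fills those in with illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthesize the 2D example: an outer annulus (class 0) and an inner ball (class 1).
# Sample counts and radii are assumptions; the paper does not report them.
n = 500
theta = 2 * torch.pi * torch.rand(n)
r_outer = 2.0 + 0.3 * torch.randn(n)  # annulus centered on radius 2
outer = torch.stack([r_outer * torch.cos(theta), r_outer * torch.sin(theta)], dim=1)
inner = 0.7 * torch.randn(n, 2)       # ball centered at the origin

X = torch.cat([outer, inner])
y = torch.cat([torch.zeros(n, dtype=torch.long), torch.ones(n, dtype=torch.long)])

# A 2-layer network trained with the CE loss, as the paper describes.
# Hidden width, optimizer, learning rate, and epoch count are assumptions.
model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

print(f"final CE loss: {loss.item():.4f}")
```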
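The Open Datasets evidence also cites training on CIFAR-10 with synthesized noisy labels, but the quoted text does not specify the noise model. The sketch below implements one simple instance-dependent flipping scheme (a per-example flip probability and a uniformly random wrong class); the noise rate, its distribution, and the flipping rule are all assumptions for illustration, not the paper's exact recipe.

```python
import torch
from torchvision import datasets, transforms

# Load CIFAR-10 (Krizhevsky et al., 2009), the dataset the paper trains on.
train = datasets.CIFAR10(root="./data", train=True, download=True,
                         transform=transforms.ToTensor())
labels = torch.tensor(train.targets)

# Instance-dependent label noise: each example gets its own flip probability,
# drawn here from a clamped Gaussian around a 0.2 noise rate (an assumption).
torch.manual_seed(0)
flip_prob = (0.2 + 0.1 * torch.randn(len(labels))).clamp(0.0, 0.6)
flip_mask = torch.rand(len(labels)) < flip_prob

# Flipped examples receive a uniformly random *different* class among the 10.
noisy = labels.clone()
offsets = torch.randint(1, 10, (int(flip_mask.sum()),))
noisy[flip_mask] = (labels[flip_mask] + offsets) % 10
train.targets = noisy.tolist()

print(f"fraction of labels flipped: {flip_mask.float().mean():.3f}")
```

The resulting `train` dataset can be passed to a standard `torch.utils.data.DataLoader` to reproduce a Figure 3-style noisy-label training run under these assumed settings.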