Robust Inference via Generative Classifiers for Handling Noisy Labels
Authors: Kimin Lee, Sukmin Yun, Kibok Lee, Honglak Lee, Bo Li, Jinwoo Shin
ICML 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our extensive experimental results demonstrate the superiority of RoG given different learning models optimized by several training techniques to handle diverse scenarios of noisy labels. |
| Researcher Affiliation | Collaboration | KAIST; University of Michigan, Ann Arbor; Google Brain; University of Illinois at Urbana-Champaign; AItrics. |
| Pseudocode | Yes | Algorithm 1 (Rousseeuw & Driessen, 1999): approximating MCD for a single Gaussian (page 4). See the MCD sketch below the table. |
| Open Source Code | Yes | Code is available at github.com/pokaxpoka/RoGNoisyLabel. |
| Open Datasets | Yes | For evaluation, we apply the proposed method to deep neural networks including DenseNet (Huang et al., 2017) and ResNet (He et al., 2016) for the classification tasks on CIFAR (Krizhevsky & Hinton, 2009), SVHN (Netzer et al., 2011), Twitter Part-of-Speech (Gimpel et al., 2011), and Reuters (Lewis et al., 2004) datasets with noisy labels. |
| Dataset Splits | Yes | For ensembles of generative classifiers, we induce the generative classifiers from basic blocks of the last dense (or residual) block of DenseNet (or ResNet), where ensemble weights of each layer are tuned on an additional validation set, which consists of 1000 images with noisy labels. (A sketch of inducing such a generative classifier appears below the table.) |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU models or CPU specifications. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., PyTorch 1.9 or Python 3.8) for reproducibility. |
| Experiment Setup | No | The paper mentions applying the method to deep neural networks and using various training methods but does not provide specific experimental setup details such as hyperparameter values (e.g., learning rate, batch size) or detailed optimizer settings in the main text. |
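
For context, Algorithm 1 in the paper is the FastMCD procedure of Rousseeuw & Driessen (1999). Below is a minimal sketch of that estimation step using scikit-learn's `MinCovDet` (which implements FastMCD) in place of the paper's own pseudocode; `feature_matrix` is a hypothetical stand-in for one class's penultimate-layer features, and the `support_fraction` value is an illustrative choice, not taken from the paper.

```python
import numpy as np
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(0)
feature_matrix = rng.normal(size=(500, 16))  # hypothetical per-class features

# FastMCD searches for the subset of samples whose covariance has minimal
# determinant, yielding mean/covariance estimates that are robust to the
# outliers introduced by mislabeled training examples.
mcd = MinCovDet(support_fraction=0.75, random_state=0).fit(feature_matrix)
robust_mean = mcd.location_    # robust estimate of the class mean
robust_cov = mcd.covariance_   # robust estimate of the class covariance
```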
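And here is a hedged sketch of inducing an LDA-style generative classifier from fixed feature representations, in the spirit of RoG: per-class means and a tied covariance are estimated robustly with MCD, and test samples are assigned by Mahalanobis distance. Names such as `features` and `noisy_labels` are hypothetical placeholders; the authors' actual implementation (github.com/pokaxpoka/RoGNoisyLabel) differs in detail, e.g., it builds ensembles over multiple layers with tuned weights.

```python
import numpy as np
from sklearn.covariance import MinCovDet

def fit_generative_classifier(features, noisy_labels, support_fraction=0.75):
    """Estimate per-class means and a tied covariance with robust MCD."""
    classes = np.unique(noisy_labels)
    dim = features.shape[1]
    means, pooled_cov = {}, np.zeros((dim, dim))
    for c in classes:
        mcd = MinCovDet(support_fraction=support_fraction, random_state=0)
        mcd.fit(features[noisy_labels == c])
        means[c] = mcd.location_
        pooled_cov += mcd.covariance_
    pooled_cov /= len(classes)  # tied (shared) covariance across classes
    return classes, means, pooled_cov

def predict(features, classes, means, pooled_cov):
    """Assign each sample to the class with the smallest Mahalanobis distance."""
    precision = np.linalg.pinv(pooled_cov)
    scores = []
    for c in classes:
        diff = features - means[c]
        # Negative squared Mahalanobis distance, per sample.
        scores.append(-np.einsum("ij,jk,ik->i", diff, precision, diff))
    return classes[np.argmax(np.stack(scores, axis=1), axis=1)]
```

Because the tied covariance makes the decision boundaries linear in feature space, this reduces to an LDA-type classifier whose parameters, unlike the original softmax head, are estimated robustly against label noise.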