Why is SAM Robust to Label Noise?

Authors: Christina Baek, J Zico Kolter, Aditi Raghunathan

ICLR 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In Figure 1, we demonstrate this finding in CIFAR10 with 30% label noise, where SAM's best test accuracy is 17% higher. In particular, we find that the robustness gains are most prominent in a particular version of SAM called 1-SAM, which applies the perturbation step to each sample in the minibatch separately. We conduct our experiments on CIFAR10 with ResNet18. (A 1-SAM sketch follows the table.)
Researcher Affiliation | Academia | Christina Baek, Zico Kolter, Aditi Raghunathan; Carnegie Mellon University; {kbaek, zkolter, raditi}@andrew.cmu.edu
Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper states, 'All code was implemented in JAX (Bradbury et al., 2018), and we utilize the Flax neural network library.', but does not provide a direct link to the source code for the methodology developed in the paper.
Open Datasets | Yes | In Figure 1, we demonstrate this finding in CIFAR10 with 30% label noise. (A label-noise sketch follows the table.)
Dataset Splits | No | The paper mentions 'training points' and 'test data' but does not explicitly provide details about training, validation, and test dataset splits needed for reproduction (e.g., percentages or counts for each split).
Hardware Specification | Yes | Our experiments were run on NVIDIA Quadro RTX A6000.
Software Dependencies | No | The paper mentions 'All code was implemented in JAX (Bradbury et al., 2018), and we utilize the Flax neural network library.', but it does not provide specific version numbers for these software components. (A version-logging sketch follows the table.)
Experiment Setup | Yes | Batch size 128, learning rate 0.01, weight decay 0.0005, 200 epochs, ρ (for SAM) 0.01. (These values are collected into a config sketch below.)
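
The Research Type row describes 1-SAM, the variant in which the SAM perturbation is computed for each sample in the minibatch separately rather than once per minibatch. Below is a minimal JAX sketch of that per-example update, written here for illustration only; the names `one_sam_grad` and `loss_fn` are assumptions, not the authors' code, and the default `rho=0.01` simply echoes the value in the Experiment Setup row.

```python
# Minimal 1-SAM sketch in JAX (illustrative, not the authors' implementation).
# `loss_fn(params, x, y)` is an assumed per-example loss function.
import jax
import jax.numpy as jnp

def one_sam_grad(loss_fn, params, xs, ys, rho=0.01):
    def per_example(x, y):
        # 1) Gradient of the loss on this single example.
        g = jax.grad(loss_fn)(params, x, y)
        # 2) Normalized ascent direction of length rho (the SAM perturbation).
        norm = jnp.sqrt(sum(jnp.sum(leaf ** 2)
                            for leaf in jax.tree_util.tree_leaves(g)))
        perturbed = jax.tree_util.tree_map(
            lambda p, leaf: p + rho * leaf / (norm + 1e-12), params, g)
        # 3) Gradient evaluated at the per-example perturbed weights.
        return jax.grad(loss_fn)(perturbed, x, y)

    # Apply the perturbation step to each sample in the minibatch separately,
    # then average the resulting per-example SAM gradients for the descent step.
    per_example_grads = jax.vmap(per_example)(xs, ys)
    return jax.tree_util.tree_map(lambda g: jnp.mean(g, axis=0),
                                  per_example_grads)
```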
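The Open Datasets row cites CIFAR10 with 30% label noise. A common way to construct such a training set is symmetric label flipping; the sketch below assumes that noise model (the paper's exact corruption procedure may differ), and the helper name `add_symmetric_label_noise` is illustrative.

```python
# Sketch of symmetric label noise: corrupt a fraction `noise_rate` of labels
# to a different, uniformly random class. The exact noise model used in the
# paper is an assumption here.
import jax
import jax.numpy as jnp

def add_symmetric_label_noise(labels, noise_rate=0.3, num_classes=10, seed=0):
    labels = jnp.asarray(labels)
    n = labels.shape[0]
    k_idx, k_shift = jax.random.split(jax.random.PRNGKey(seed))
    # Select exactly floor(noise_rate * n) examples to corrupt.
    flip_idx = jax.random.permutation(k_idx, n)[: int(noise_rate * n)]
    flip = jnp.zeros(n, dtype=bool).at[flip_idx].set(True)
    # Shifting by a random nonzero offset (mod num_classes) guarantees the
    # corrupted label differs from the original one.
    shift = jax.random.randint(k_shift, (n,), 1, num_classes)
    return jnp.where(flip, (labels + shift) % num_classes, labels)
```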
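For convenience, the hyperparameters from the Experiment Setup row, together with the dataset and model named in the Research Type row, can be collected into a single training config. The dictionary below only restates the reported values; its key names are illustrative.

```python
# Reported experiment settings gathered in one place; key names are illustrative.
config = {
    "dataset": "CIFAR10 (30% label noise)",
    "model": "ResNet18",
    "batch_size": 128,
    "learning_rate": 0.01,
    "weight_decay": 0.0005,
    "epochs": 200,
    "rho": 0.01,  # SAM perturbation radius
}
```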
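Since the Software Dependencies row notes that JAX and Flax versions are not reported, a reproduction would need to record them explicitly; one minimal way to log them is sketched below.

```python
# Record the library versions actually used, since the paper does not pin them.
import jax
import jaxlib
import flax

print("jax:", jax.__version__)
print("jaxlib:", jaxlib.__version__)
print("flax:", flax.__version__)
```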