Understanding and Mitigating the Tradeoff between Robustness and Accuracy

Authors: Aditi Raghunathan, Sang Michael Xie, Fanny Yang, John Duchi, Percy Liang

ICML 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirically, for neural networks, we find that RST with different adversarial training methods improves both standard and robust error for random and adversarial ℓ∞ perturbations in CIFAR-10.
Researcher Affiliation | Academia | Stanford University; ETH Zurich
Pseudocode | Yes | Appendix D gives a practical iterative algorithm (Algorithm 1) that computes the RST estimator for linear regression, reminiscent of adversarial training in the semi-supervised setting. (See the RST sketch after the table.)
Open Source Code | Yes | Code for our experiments is available here.
Open Datasets | Yes | Empirically on CIFAR-10, we find that the gap between the standard error of adversarial training and standard training decreases as we increase the labeled data size... and ...when using unlabeled data from Tiny Images as sourced in Carmon et al. (2019).
Dataset Splits | No | The paper discusses varying 'labeled training set sizes' and uses datasets like CIFAR-10, which have standard splits, but it does not explicitly state the train/validation/test percentages or counts for its specific experiments in the main text.
Hardware Specification | No | The paper mentions using WRN-28-10 and WRN-40-2 models and training parameters, but does not specify any hardware details such as GPU or CPU models.
Software Dependencies | No | The paper does not explicitly list any software dependencies with specific version numbers used for the experiments.
Experiment Setup | Yes | All methods use ϵ = 8/255 while training and use the WRN-28-10 model. Robust accuracies are against a PGD-based attack with 20 steps (see the PGD sketch after the table). All adversarial and random methods use the same parameters during training and use the WRN-40-2 model.
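
The RST procedure referenced in the Pseudocode row follows a two-stage recipe: train a standard model on the labeled set, pseudo-label the unlabeled pool, then train robustly on the combination. The sketch below is a minimal PyTorch rendering of that recipe, not the authors' released code; the `attack` callable, the loss weighting `lam`, and the optimizer handling are illustrative assumptions rather than the paper's exact objective.

```python
# Minimal sketch of robust self-training (RST), assuming PyTorch.
# `attack`, `lam`, and the optimizer setup are illustrative assumptions,
# not the authors' exact objective or hyperparameters.
import torch
import torch.nn.functional as F

def pseudo_label(model, x_unlabeled):
    """Stage 1: label the unlabeled pool with a standardly trained model."""
    model.eval()
    with torch.no_grad():
        return model(x_unlabeled).argmax(dim=1)

def rst_update(model, optimizer, x_lab, y_lab, x_unl, y_pseudo, attack, lam=1.0):
    """Stage 2: one update combining a standard loss on labeled data with an
    adversarial loss on pseudo-labeled data (the weighting lam is an
    assumption)."""
    x_adv = attack(model, x_unl, y_pseudo)  # e.g. the PGD sketch below
    model.train()
    loss = F.cross_entropy(model(x_lab), y_lab)
    loss = loss + lam * F.cross_entropy(model(x_adv), y_pseudo)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```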
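
The Experiment Setup row pins down the evaluation attack well enough to sketch: ℓ∞ perturbations with ϵ = 8/255 and 20 PGD steps. The function below is a standard ℓ∞ PGD implementation under those settings, assuming PyTorch and inputs scaled to [0, 1]; the step size (2/255) and the random start are assumptions, since the page does not report them.

```python
# Sketch of a 20-step l_inf PGD attack at eps = 8/255, matching the reported
# evaluation settings; the step size (2/255) and random start are assumptions.
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=8/255, step_size=2/255, steps=20):
    """Return adversarial examples within an l_inf ball of radius eps.
    Assumes inputs x are scaled to [0, 1]."""
    delta = torch.empty_like(x).uniform_(-eps, eps)  # random start in the ball
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(torch.clamp(x + delta, 0.0, 1.0)), y)
        grad, = torch.autograd.grad(loss, delta)
        # Ascend the loss with a signed-gradient step, then project back
        # onto the l_inf ball of radius eps.
        delta = (delta.detach() + step_size * grad.sign()).clamp(-eps, eps)
    return torch.clamp(x + delta, 0.0, 1.0)
```

Robust accuracy under this setup is then the fraction of `pgd_linf` outputs the model still classifies correctly.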