Probabilistically Robust Learning: Balancing Average and Worst-case Performance

Authors: Alexander Robey, Luiz Chamon, George J. Pappas, Hamed Hassani

ICML 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | From a practical point of view, we propose a novel algorithm based on risk-aware optimization that effectively balances average and worst-case performance at a considerably lower computational cost relative to adversarial training. Our results on MNIST, CIFAR-10, and SVHN illustrate the advantages of this framework on the spectrum from average to worst-case robustness.
Researcher Affiliation | Academia | ¹Department of Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, PA, USA; ²University of California, Berkeley, Berkeley, CA, USA.
Pseudocode | Yes | Algorithm 1: Probabilistically Robust Learning (PRL)
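The paper's Algorithm 1 is not reproduced in this report. Purely as a rough illustration of what a risk-aware training step of this kind can look like, the PyTorch sketch below samples random perturbations per example and backpropagates through an empirical quantile of the per-perturbation losses. The function name, the uniform sampling choice, the number of samples, and the quantile level rho are hypothetical placeholders, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def risk_aware_step(model, x, y, optimizer, eps=0.1, num_samples=10, rho=0.1):
    """Hypothetical risk-aware training step (a sketch, not the paper's
    Algorithm 1): sample random perturbations inside an l_inf ball of radius
    eps and penalize the (1 - rho)-quantile of the per-example losses."""
    losses = []
    for _ in range(num_samples):
        # One simple sampling choice: uniform noise in the eps-ball.
        delta = (2 * torch.rand_like(x) - 1) * eps
        logits = model((x + delta).clamp(0.0, 1.0))
        losses.append(F.cross_entropy(logits, y, reduction="none"))
    losses = torch.stack(losses, dim=0)           # (num_samples, batch)
    # Empirical (1 - rho)-quantile of the loss over perturbations, per example.
    quantile_loss = torch.quantile(losses, 1.0 - rho, dim=0).mean()
    optimizer.zero_grad()
    quantile_loss.backward()
    optimizer.step()
    return quantile_loss.item()
```

Unlike multi-step PGD adversarial training, a step of this form only requires forward/backward passes through randomly sampled perturbations, which is consistent with the lower computational cost the quoted abstract claims.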
Open Source Code | Yes | Our code is available at: https://github.com/arobey1/advbench.
Open Datasets | Yes | We conclude our work by thoroughly evaluating the performance of the algorithm proposed in the previous section on three standard benchmarks: MNIST, CIFAR-10, and SVHN.
Dataset Splits | Yes | We conclude our work by thoroughly evaluating the performance of the algorithm proposed in the previous section on three standard benchmarks: MNIST, CIFAR-10, and SVHN.
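For context on the two dataset rows above: the paper only names the three benchmarks, so the snippet below simply shows how their standard train/test splits are typically obtained via torchvision. The root path and transform are placeholder assumptions, not details taken from the paper.

```python
from torchvision import datasets, transforms

# Standard train/test splits as shipped with torchvision (assumed here;
# the paper itself only names the benchmarks).
to_tensor = transforms.ToTensor()

mnist_train = datasets.MNIST("data", train=True, download=True, transform=to_tensor)
mnist_test = datasets.MNIST("data", train=False, download=True, transform=to_tensor)

cifar_train = datasets.CIFAR10("data", train=True, download=True, transform=to_tensor)
cifar_test = datasets.CIFAR10("data", train=False, download=True, transform=to_tensor)

svhn_train = datasets.SVHN("data", split="train", download=True, transform=to_tensor)
svhn_test = datasets.SVHN("data", split="test", download=True, transform=to_tensor)
```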
Hardware Specification | Yes | All experiments were run across two four-GPU workstations, comprising a total of eight Quadro RTX 5000 GPUs.
Software Dependencies | No | The paper mentions optimizers like Adadelta and SGD, but it does not specify version numbers for these software components or any other libraries.
Experiment Setup | Yes | To train these models, we use the Adadelta optimizer (Zeiler, 2012) to minimize the cross-entropy loss for 150 epochs with no learning rate decay and an initial learning rate of 1.0. All classifiers were evaluated with a 10-step PGD adversary. To compute the augmented accuracy, we sampled ten perturbations per data point, and to compute the Prob Acc metric, we sampled 100 perturbations per data point. For CIFAR-10 (Krizhevsky et al., 2009) and SVHN (Netzer et al., 2011), we used the ResNet-18 architecture (He et al., 2016). We trained using SGD with an initial learning rate of 0.01 and a momentum of 0.9. We also used weight decay with a penalty weight of 3.5 × 10⁻³. All classifiers were trained for 115 epochs, and we decayed the learning rate by a factor of 10 at epochs 55, 75, and 90.
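To make the quoted CIFAR-10/SVHN setup concrete, here is a hedged PyTorch sketch of the optimizer and learning-rate schedule described above (SGD, learning rate 0.01, momentum 0.9, weight decay 3.5 × 10⁻³, decayed by a factor of 10 at epochs 55, 75, and 90 over 115 epochs). The model constructor and the omitted training loop body are illustrative stand-ins, not the authors' code.

```python
import torch
from torchvision.models import resnet18

# ResNet-18 backbone with a 10-class head (CIFAR-10 / SVHN); illustrative only,
# the paper's exact ResNet-18 variant may differ.
model = resnet18(num_classes=10)

# Optimizer and schedule matching the quoted setup: SGD, lr 0.01, momentum 0.9,
# weight decay 3.5e-3, learning rate divided by 10 at epochs 55, 75, and 90.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9,
                            weight_decay=3.5e-3)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer,
                                                 milestones=[55, 75, 90],
                                                 gamma=0.1)

for epoch in range(115):
    # ... one training epoch over CIFAR-10 or SVHN goes here ...
    scheduler.step()
```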