Boosting Barely Robust Learners: A New Perspective on Adversarial Robustness
Authors: Avrim Blum, Omar Montasser, Greg Shakhnarovich, Hongyang Zhang
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct simple experiments that illustrate the utility of our theoretical contribution for boosting robustness. These experiments demonstrate that our algorithm, β-RoBoost, can boost and improve the robustness of black-box learning algorithms. We describe the setup and the results below. On make_moons with perturbation radius 0.1, the baseline Linear SVM achieves a robust accuracy of 84.78%, while β-RoBoost (with 2 rounds of boosting) achieves robust accuracy of 89.86%. On MNIST with perturbation radius 0.5, the baseline Linear SVM achieves a robust accuracy of 73.9%, while β-RoBoost (with 2 rounds of boosting) achieves robust accuracy of 80.05%. |
| Researcher Affiliation | Academia | Avrim Blum (avrim@ttic.edu, TTI Chicago); Omar Montasser (omar@ttic.edu, TTI Chicago); Greg Shakhnarovich (greg@ttic.edu, TTI Chicago); Hongyang Zhang (first.last@uwaterloo.ca, University of Waterloo) |
| Pseudocode | Yes | Algorithm 1: β-RoBoost (Boosting barely robust learners). |
| Open Source Code | Yes | We include code to reproduce our MNIST experiments with perturbation radius 1.0 in Appendix F. |
| Open Datasets | Yes | Datasets. A synthetic binary classification dataset (make_moons from scikit-learn), and MNIST (rescaled by dividing by 255, and converted to binary classification of odd vs. even). A data-loading sketch is given after this table. |
| Dataset Splits | No | The paper mentions datasets like make_moons and MNIST, and talks about running Linear SVM on them, but does not specify exact training, validation, or test dataset splits (e.g., 80/10/10 percentage or sample counts). |
| Hardware Specification | No | No specific hardware details (like GPU/CPU models, memory) used for experiments are mentioned in the paper. |
| Software Dependencies | No | The paper mentions 'scikit-learn' as a dependency but does not provide specific version numbers for any software components. |
| Experiment Setup | Yes | Perturbation set U. We consider ℓ2 perturbations of some radius. ... On make_moons with perturbation radius 0.1, ... On MNIST with perturbation radius 0.5, ... Finally, on MNIST with a bigger perturbation radius 1.0, ... β-RoBoost (with 2 rounds of boosting) achieves robust accuracy... A robust-accuracy evaluation sketch is given after this table. |
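
The dataset preparation described in the Open Datasets row can be sketched with scikit-learn as follows. This is not the authors' code: the sample count, `noise` level, and the use of `fetch_openml` are illustrative assumptions, since the report only states that make_moons is used and that MNIST is rescaled by 255 and relabeled odd vs. even.

```python
# Minimal sketch (assumed parameters, not the authors' script) of preparing the two
# datasets as described: make_moons from scikit-learn, and MNIST rescaled to [0, 1]
# and converted to a binary odd-vs-even classification task.
import numpy as np
from sklearn.datasets import make_moons, fetch_openml

# Synthetic dataset; n_samples, noise, and random_state are illustrative choices.
X_moons, y_moons = make_moons(n_samples=2000, noise=0.1, random_state=0)

# MNIST: pixel values divided by 255, labels mapped to even (0) vs. odd (1).
mnist = fetch_openml("mnist_784", version=1, as_frame=False)
X_mnist = mnist.data.astype(np.float64) / 255.0
y_mnist = mnist.target.astype(int) % 2
```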
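
For the baseline numbers in the Experiment Setup row (robust accuracy of a Linear SVM under ℓ2 perturbations), robust accuracy has a closed form for linear classifiers: a point is robustly correct within an ℓ2 ball of a given radius iff its signed margin is at least that radius times the weight norm. The sketch below illustrates this under assumed data and hyperparameters; it is not the paper's evaluation code, and `robust_accuracy_linear` is a hypothetical helper.

```python
# Sketch of robust-accuracy evaluation for a linear classifier under l2 perturbations.
# A point (x, y) with y in {0, 1} is robustly correct at radius r iff
# (2y - 1) * (w @ x + b) >= r * ||w||_2.  Radius, data, and hyperparameters are illustrative.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.svm import LinearSVC

def robust_accuracy_linear(clf, X, y, radius):
    """Fraction of points whose prediction stays correct under any l2 perturbation of the given radius."""
    w = clf.coef_.ravel()
    b = clf.intercept_[0]
    margins = (2 * y - 1) * (X @ w + b)  # signed margins with labels mapped to {-1, +1}
    return float(np.mean(margins >= radius * np.linalg.norm(w)))

X, y = make_moons(n_samples=2000, noise=0.1, random_state=0)
clf = LinearSVC(C=1.0, max_iter=10000).fit(X, y)
print("robust accuracy at radius 0.1:", robust_accuracy_linear(clf, X, y, 0.1))
```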