Computational Asymmetries in Robust Classification
Authors: Samuele Marro, Michele Lombardi
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, since our results emphasize the connection between verification and attack problems, we provide an empirical investigation of the use of heuristic attacks for verification. We found heuristic attacks to be high-quality approximators for exact decision boundary distances: a pool of seven heuristic attacks provided an accurate (average over-estimate between 2.04% and 4.65%) and predictable (average R² > 0.99) approximation of the true optimum for small-scale Neural Networks trained on the MNIST and CIFAR10 datasets. |
| Researcher Affiliation | Academia | Samuele Marro 1 Michele Lombardi 1 1Department of Computer Science, University of Bologna. Correspondence to: Samuele Marro <samuele.marro@unibo.it>. |
| Pseudocode | No | The paper includes mathematical definitions and proofs but no distinct pseudocode or algorithm blocks. |
| Open Source Code | Yes | All our code, models, and data are available under MIT license at https://github.com/samuelemarro/counter-attack. |
| Open Datasets | Yes | We randomly selected 2.3k samples each from the test set of two datasets, MNIST and CIFAR10. We release our benchmarks and adversarial examples (both exact and heuristic) in a new dataset, named UG100. MNIST (LeCun et al., 1998) and CIFAR10 (Krizhevsky et al., 2009) datasets. |
| Dataset Splits | Yes | We test this type of buffer using 5-fold cross-validation on each configuration. |
| Hardware Specification | Yes | Each node of the cluster has 384 GB of RAM and features two Intel Cascade Lake 8260 CPUs, each with 24 cores and a clock frequency of 2.4 GHz. On a single machine with an AMD Ryzen 5 1600X six-core 3.6 GHz processor, 16 GB of RAM and an NVIDIA GTX 1060 6 GB GPU. |
| Software Dependencies | Yes | All our code is written in Python + PyTorch (Paszke et al., 2019), with the exception of the MIPVerify interface, which is written in Julia. For the Basic Iterative Method (BIM), the Fast Gradient Sign Method (FGSM) and the Projected Gradient Descent (PGD) attack, we used the implementations provided by the AdverTorch library (Ding et al., 2019). We ran MIPVerify using the Julia library MIPVerify.jl and Gurobi (Gurobi Optimization, LLC, 2022). |
| Experiment Setup | Yes | We report the chosen architectures in Tables 2 and 3, while Table 4 outlines their accuracies and parameter counts. We also report the same parameters in Table 1. (Table 1 lists specific hyperparameters like 'Epochs 425', 'Learning Rate 1e-4', 'Batch Size 32', 'Attack #Iterations 200', 'ε 0.05', etc.) |
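The accuracy metrics quoted above (average over-estimate and R²) compare heuristic attack distances against exact decision-boundary distances. As an illustrative sketch only (not taken from the paper's codebase; the function name and linear-fit choice are assumptions), these two metrics can be computed from paired distance arrays like so:

```python
import numpy as np

def overestimate_and_r2(exact, heuristic):
    """Compare heuristic attack distances against exact boundary distances.

    Returns the average relative over-estimate (percent) and the R^2 of a
    linear fit predicting heuristic distances from exact distances.
    """
    exact = np.asarray(exact, dtype=float)
    heuristic = np.asarray(heuristic, dtype=float)
    # Heuristic attacks find feasible adversarial examples, so their
    # distances can only over-estimate the true minimal distance.
    avg_overestimate = np.mean((heuristic - exact) / exact) * 100.0
    # R^2 of a least-squares linear fit heuristic ~ exact.
    slope, intercept = np.polyfit(exact, heuristic, 1)
    pred = slope * exact + intercept
    ss_res = np.sum((heuristic - pred) ** 2)
    ss_tot = np.sum((heuristic - np.mean(heuristic)) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return avg_overestimate, r2
```

A high R² here means the heuristic distances are a predictable (approximately affine) function of the exact ones, which is what makes them usable as verification proxies.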