SPADE: A Spectral Method for Black-Box Adversarial Robustness Evaluation

Authors: Wuxinlin Cheng, Chenhui Deng, Zhiqiang Zhao, Yaohui Cai, Zhiru Zhang, Zhuo Feng

ICML 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experiments show the proposed SPADE method leads to promising empirical results for neural network models that are adversarially trained with the MNIST and CIFAR-10 datasets. We conduct four different types of experiments to evaluate the efficacy of our proposed approach.
Researcher Affiliation | Academia | Stevens Institute of Technology, New Jersey, USA; Cornell University, New York, USA.
Pseudocode | No | The paper describes the methods and steps in prose and mathematical formulas, but it does not include any explicitly labeled pseudocode blocks or algorithm listings.
Open Source Code | Yes | The SPADE source code is available at github.com/Feng-Research/SPADE.
Open Datasets | Yes | MNIST consists of 70,000 images of size 28 × 28. CIFAR-10 consists of 60,000 images of size 32 × 32 × 3.
Dataset Splits | No | The paper mentions 'training' and 'testing' sets, stating 'For the k-NN graph construction, we choose k = 10 (10–20) for the training (testing) set.' However, it does not explicitly describe a separate validation split for hyperparameter tuning or model selection, which is typically distinct from the test set.
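For readers reconstructing the evaluation pipeline, the k-NN graph construction quoted above can be sketched in a few lines. This is a minimal NumPy illustration of building a symmetric k-nearest-neighbour graph over data points; the function name and the synthetic data are ours, not the paper's, and the paper's actual implementation may differ.

```python
import numpy as np

def knn_graph(X, k):
    """Build a symmetric k-nearest-neighbour adjacency matrix.

    X : (n, d) array of data points (e.g. flattened images).
    k : neighbours per node (the paper quotes k = 10 for the training set).
    """
    # Pairwise squared Euclidean distances via the expansion
    # ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    np.fill_diagonal(d2, np.inf)              # exclude self-loops
    nbrs = np.argsort(d2, axis=1)[:, :k]      # k closest points per row
    n = X.shape[0]
    A = np.zeros((n, n))
    A[np.repeat(np.arange(n), k), nbrs.ravel()] = 1.0
    # Symmetrise: connect i and j if either is among the other's k neighbours.
    return np.maximum(A, A.T)

# Demo on synthetic data standing in for flattened images.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8))
A = knn_graph(X, k=10)
```

The resulting adjacency matrix is symmetric with no self-loops, and every node has at least k neighbours after symmetrisation.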
Hardware Specification | No | The paper does not provide specific details about the hardware used for the experiments, such as GPU models, CPU types, or memory specifications.
Software Dependencies | No | The paper does not list specific software dependencies with their version numbers (e.g., Python version, specific deep learning framework versions like PyTorch or TensorFlow).
Experiment Setup | Yes | The vanilla projected gradient descent (PGD) based adversarial training approach with perturbation magnitude ϵ ∈ {0.4} on MNIST and ϵ ∈ {8.0, 12.0, 14.0} on CIFAR-10, respectively. All the attacks use a 0.01 step size. For the k-NN graph construction, we choose k = 10 (10–20) for the training (testing) set.
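The PGD attack quoted in the setup is a loop of signed gradient-ascent steps followed by projection back into an L∞ ball of radius ϵ. Below is a minimal NumPy sketch on a logistic model, which stands in for the paper's adversarially trained networks; all function and variable names are ours, and only the 0.01 step size is taken from the setup above.

```python
import numpy as np

def pgd_attack(x, y, w, b, eps, step=0.01, iters=40):
    """Sketch of an L-infinity PGD attack on a logistic model.

    Maximises the binary cross-entropy loss of the model
    sigmoid(w.x + b) by taking signed gradient steps of size
    `step` (0.01, as in the paper's setup) and projecting the
    perturbation back into the eps-ball around the clean input x.
    The logistic model is an illustrative stand-in, not the
    paper's networks.
    """
    x_adv = x.copy()
    for _ in range(iters):
        z = x_adv @ w + b
        p = 1.0 / (1.0 + np.exp(-z))              # sigmoid output
        grad = (p - y) * w                         # d(BCE)/dx
        x_adv = x_adv + step * np.sign(grad)       # gradient-ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)   # project to eps-ball
    return x_adv
```

Running the attack on a random logistic model increases the loss while keeping the perturbation within the ϵ budget, which is exactly the constraint the quoted perturbation magnitudes express.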