Evaluating the Adversarial Robustness of Adaptive Test-time Defenses

Authors: Francesco Croce, Sven Gowal, Thomas Brunner, Evan Shelhamer, Matthias Hein, Taylan Cemgil

ICML 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We categorize such adaptive test-time defenses, explain their potential benefits and drawbacks, and evaluate a representative variety of the latest adaptive defenses for image classification." and "In our case study, we evaluate nine adaptive test-time defenses which rely on the adaptation principles elaborated in Sec. 2."
Researcher Affiliation | Collaboration | "1University of Tübingen, Germany 2DeepMind, London, United Kingdom 3Everyday Robots, Munich, Germany."
Pseudocode | No | The paper describes methods and processes in narrative text and figures, but does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | "Finally, we note that the code developed for our case study is available at https://github.com/fra31/evaluating-adaptive-test-time-defenses."
Open Datasets | Yes | "Table 1 summarizes the defenses considered in this case study, categorizes each defense (according to Sec. 2.1 and Sec. 2.2), and details the corresponding results against ℓ∞-norm bounded attacks with budget ϵ = 8/255 on CIFAR-10 (which is commonly evaluated by all defenses in their respective papers)." and "We use 5000 images from the CIFAR-10 training set to learn the linear projection."
Dataset Splits | No | The paper mentions a 'training set' and 'test images' (e.g., for CIFAR-10) but does not specify a validation split (percentages, sample counts, or an explicit partitioning methodology), which would be needed to reproduce its data setup.
Hardware Specification | Yes | "For the purpose of this demonstration, and to be able to run our evaluation on a single NVIDIA V100 GPU, we reduce the batch size to 50 (instead of 512)."
Software Dependencies | No | The paper mentions 'Modern frameworks such as PyTorch (Paszke et al., 2019) and JAX (Bradbury et al., 2018)' but does not provide specific version numbers for these or other software dependencies.
Experiment Setup | Yes | "We use the original implementation with corresponding parameters, including integration time T = 5." and "We set the number of defense iterations to N = 5 and learning rate to α = 10⁻³" and "In practice, Yoon et al. set T = 10, S = 10 and σ = 0.25."
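The ℓ∞ threat model with budget ϵ = 8/255 referenced in the table can be made concrete with a short sketch. This is an illustrative NumPy helper under assumed conventions (images as float arrays in [0, 1]), not code from the paper's repository:

```python
import numpy as np

EPS = 8 / 255  # ℓ∞ budget used in the paper's CIFAR-10 evaluations

def project_linf(x_adv, x_clean, eps=EPS):
    """Project an adversarial example back into the ℓ∞ ball of radius
    eps around the clean image, then into the valid pixel range [0, 1]."""
    x_adv = np.clip(x_adv, x_clean - eps, x_clean + eps)
    return np.clip(x_adv, 0.0, 1.0)

# Toy check: a perturbation larger than eps is clipped back to the budget.
clean = np.full((3, 32, 32), 0.5)      # gray CIFAR-10-sized image
adv = clean + 0.1                      # 0.1 exceeds 8/255 ≈ 0.031
projected = project_linf(adv, clean)
print(np.abs(projected - clean).max())  # at most 8/255
```

Iterative attacks such as PGD apply a projection of this form after every gradient step, which is what makes the ϵ = 8/255 budget comparable across the defenses in Table 1.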