Rethinking Image-Scaling Attacks: The Interplay Between Vulnerabilities in Machine Learning Systems

Authors: Yue Gao, Ilia Shumailov, Kassem Fawaz

ICML 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Finally, we perform an empirical evaluation of our improved HR black-box attacks and five state-of-the-art defenses designed to protect the scaling procedure. Our evaluation is designed to answer the following questions. Q1: Can we improve black-box attacks by exploiting the scaling function to hide adversarial perturbation? Q2: Can we still improve black-box attacks when the scaling function is protected by defenses? ... We use ImageNet (Russakovsky et al., 2015) and CelebA (Liu et al., 2015) datasets. ... Evaluation Metrics. We use standard metrics: (1) scaled l2-norm ... (2) attack success rate (ASR) ...
Researcher Affiliation | Academia | Yue Gao 1, Ilia Shumailov 2, Kassem Fawaz 1 (1 University of Wisconsin-Madison, Madison, WI, USA; 2 Vector Institute, Toronto, ON, Canada).
Pseudocode | Yes | Algorithm 1: Scaling-aware Noise Sampling (SNS) ... Algorithm 2: High-Resolution HSJ Attack (Simplified) ... Algorithm 3: High-Resolution Sign-OPT Attack (Simplified)
Open Source Code | Yes | Our code is available at https://github.com/wi-pi/rethinking-image-scaling-attacks.
Open Datasets | Yes | We use ImageNet (Russakovsky et al., 2015) and CelebA (Liu et al., 2015) datasets.
Dataset Splits | No | The paper states that pre-trained models are used and that images must be correctly classified before the attack, but it does not provide specific training, validation, or test splits for partitioning the datasets.
Hardware Specification | Yes | We run all experiments on 8 Nvidia RTX 2080 Ti GPUs, each with 11 GB memory.
Software Dependencies | No | The paper mentions 'TorchVision', 'OpenCV', and the 'Adversarial Robustness Toolbox' in the text but does not give version numbers for all of these components.
Experiment Setup | Yes | For the C&W attack, we set the binary search steps to 20 with a maximum of 1,000 iterations. The confidence parameter κ is set to {0, 1, ..., 10}. For the PGD attack, we set the number of steps to 100 with l2-norm budget ϵ ∈ {1, 2, ..., 20} and step size 0.1ϵ. Particularly, we did not change the default parameters used in black-box attacks; all optimization parameters are fixed to the official recommendation.
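The Research Type row above quotes two standard metrics, the scaled l2-norm and the attack success rate (ASR). A minimal sketch of both follows; the paper does not spell out its exact scaling normalization in the quoted text, so dividing by the square root of the number of pixels is an assumption here, and `scaled_l2_norm` / `attack_success_rate` are hypothetical helper names.

```python
import numpy as np

def scaled_l2_norm(x_adv, x):
    """l2 distance between adversarial and clean images, normalized by
    image size. The per-pixel normalization (divide by sqrt(#elements))
    is an assumption; the paper only names the metric."""
    delta = (x_adv - x).ravel()
    return np.linalg.norm(delta) / np.sqrt(delta.size)

def attack_success_rate(pred_labels, true_labels):
    """Fraction of untargeted attacks whose post-attack prediction
    differs from the true label."""
    pred_labels = np.asarray(pred_labels)
    true_labels = np.asarray(true_labels)
    return float(np.mean(pred_labels != true_labels))
```

For example, a uniform perturbation of magnitude 1 on every pixel has scaled l2-norm 1 regardless of image size, which makes the metric comparable across the low- and high-resolution settings the paper studies.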
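The Pseudocode row lists Algorithm 1, Scaling-aware Noise Sampling (SNS), without reproducing it. As a rough illustration only: one plausible reading of "scaling-aware" sampling is to draw noise at the low resolution the classifier actually sees and lift it to high resolution so that downscaling preserves it. The sketch below encodes that reading with nearest-neighbor upsampling and average-pooling downscaling; both choices, and the function names, are assumptions, and the paper's actual Algorithm 1 may differ.

```python
import numpy as np

def scaling_aware_noise(lr_shape, scale, rng):
    """Hypothetical SNS sketch: sample Gaussian noise in low-resolution
    (LR) space, then upsample to high resolution (HR) so the scaling
    procedure maps the HR noise back to the LR noise."""
    lr_noise = rng.standard_normal(lr_shape)
    # Nearest-neighbor upsampling: repeat each LR pixel in a scale x scale block.
    hr_noise = np.kron(lr_noise, np.ones((scale, scale)))
    return lr_noise, hr_noise

def average_downscale(hr, scale):
    """Average-pooling downscale, used to check the HR noise survives scaling."""
    h, w = hr.shape
    return hr.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))
```

Under these assumptions, `average_downscale(hr_noise, scale)` recovers `lr_noise` exactly, which is the property that lets a black-box attack operate in HR space without the scaling function destroying its perturbation.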
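The Experiment Setup row fixes the PGD configuration at 100 steps with an l2 budget ϵ and step size 0.1ϵ. A minimal l2-PGD sketch using those quoted settings is below; it is a generic textbook PGD loop, not the paper's implementation (the paper uses library attacks with default parameters), and it omits random starts and pixel-range clipping for brevity.

```python
import numpy as np

def l2_pgd(x, grad_fn, eps, steps=100, step_size=None):
    """Minimal l2-PGD sketch with the quoted settings: 100 steps,
    step size 0.1 * eps. grad_fn returns the loss gradient w.r.t. x."""
    if step_size is None:
        step_size = 0.1 * eps
    x0 = x.copy()
    x_adv = x.copy()
    for _ in range(steps):
        g = grad_fn(x_adv)
        g_norm = np.linalg.norm(g) + 1e-12
        x_adv = x_adv + step_size * g / g_norm   # normalized gradient-ascent step
        delta = x_adv - x0
        d_norm = np.linalg.norm(delta)
        if d_norm > eps:                          # project back onto the l2 ball
            x_adv = x0 + delta * (eps / d_norm)
    return x_adv
```

With a linear loss (constant gradient w), the iterate walks in the direction w and settles on the boundary of the ϵ-ball after ϵ / (0.1ϵ) = 10 steps, which is why 100 steps comfortably saturate the budget in this configuration.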