Image Shortcut Squeezing: Countering Perturbative Availability Poisons with Compression

Authors: Zhuoran Liu, Zhengyu Zhao, Martha Larson

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We present extensive experiments showing that 12 state-of-the-art PAP methods are vulnerable to Image Shortcut Squeezing (ISS), which is based on simple compression. For example, on average, ISS restores the CIFAR-10 model accuracy to 81.73%, surpassing the previous best preprocessing-based countermeasures by 37.97% absolute.
Researcher Affiliation | Academia | Radboud University, Nijmegen, Netherlands; Xi'an Jiaotong University, Xi'an, China; CISPA Helmholtz Center for Information Security, Saarbrücken, Germany.
Pseudocode | No | The paper does not contain any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Our code is available at https://github.com/liuzrcc/ImageShortcutSqueezing.
Open Datasets | Yes | We consider three datasets: CIFAR-10 (Krizhevsky, 2009), CIFAR-100 (Krizhevsky, 2009), and a 100-class subset of ImageNet (Deng et al., 2009).
Dataset Splits | No | The paper specifies training and testing image counts for CIFAR-10/100 (50,000 training, 10,000 testing) and uses the official ImageNet validation set for testing, but it does not define a separate validation split for hyperparameter tuning, nor a complete train/validation/test split for every experimental setup.
Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments, such as exact GPU/CPU models or cloud instance specifications.
Software Dependencies | No | The paper mentions software components such as the "torchvision transforms module" but does not specify their version numbers, which are necessary for reproducing the ancillary software environment.
Experiment Setup | Yes | We train the CIFAR-10/100 models for 60 epochs and the ImageNet models for 100 epochs. We use SGD with a momentum of 0.9, a learning rate of 0.025, and cosine weight decay. ... If not explicitly mentioned, we use JPEG with a quality factor of 10 and bit depth reduction (BDR) with 2 bits. ... For adversarial training (AT), PGD-10 is used with a step size of 2/255, where the model is trained on CIFAR-10 for 100 epochs. We use a kernel size of 3 for median, mean, and Gaussian smoothing (with a standard deviation of 0.1). (Hedged code sketches of these squeezers and the training configuration follow the table.)
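To make the quoted setup concrete, below is a minimal Python sketch of the compression-based squeezers that ISS applies to training images: JPEG at quality factor 10, 2-bit bit depth reduction, and kernel-size-3 smoothing with standard deviation 0.1. The parameter values come from the quotes above; the function names and the PIL/NumPy/SciPy composition are illustrative assumptions, not the authors' released implementation (see the repository linked above for that).

```python
# Hedged sketch of ISS-style "image shortcut squeezing" preprocessing.
# Parameter values (JPEG quality 10, 2-bit depth, kernel size 3, sigma 0.1)
# are taken from the setup quoted above; everything else (function names,
# PIL/NumPy/SciPy composition) is an illustrative assumption.
import io

import numpy as np
from PIL import Image
from scipy.ndimage import gaussian_filter, median_filter


def jpeg_squeeze(img: Image.Image, quality: int = 10) -> Image.Image:
    """Round-trip the image through lossy JPEG at a low quality factor."""
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")


def bit_depth_reduction(img: Image.Image, bits: int = 2) -> Image.Image:
    """Quantize each 8-bit channel down to `bits` bits."""
    arr = np.asarray(img).astype(np.float32) / 255.0
    levels = 2 ** bits - 1
    arr = np.round(arr * levels) / levels
    return Image.fromarray((arr * 255.0).astype(np.uint8))


def median_squeeze(img: Image.Image, kernel: int = 3) -> Image.Image:
    """Median smoothing over the two spatial axes only."""
    arr = np.asarray(img)
    return Image.fromarray(median_filter(arr, size=(kernel, kernel, 1)))


def gaussian_squeeze(img: Image.Image, sigma: float = 0.1) -> Image.Image:
    """Gaussian smoothing; truncate=10 yields a 3x3 kernel at sigma=0.1."""
    arr = np.asarray(img).astype(np.float32)
    arr = gaussian_filter(arr, sigma=(sigma, sigma, 0.0), truncate=10.0)
    return Image.fromarray(np.clip(arr, 0.0, 255.0).astype(np.uint8))


if __name__ == "__main__":
    # "poisoned.png" is a hypothetical placeholder for a PAP-perturbed image.
    img = Image.open("poisoned.png").convert("RGB")
    squeezed = bit_depth_reduction(jpeg_squeeze(img))  # defaults quoted above
    squeezed.save("squeezed.png")
```

The quoted optimizer line ("SGD with a momentum of 0.9, a learning rate of 0.025, and cosine weight decay") can be read as SGD with a cosine-annealed learning-rate schedule; a PyTorch sketch under that assumption:

```python
# Sketch of the quoted training configuration, assuming PyTorch and reading
# "cosine weight decay" as a cosine-annealed learning rate (an assumption;
# the paper's exact schedule may differ).
import torch

model = torch.nn.Linear(3 * 32 * 32, 10)  # placeholder for the actual classifier
optimizer = torch.optim.SGD(model.parameters(), lr=0.025, momentum=0.9)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=60)  # 60 epochs (CIFAR-10/100)
```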