Improving Adversarial Transferability via Intermediate-level Perturbation Decay

Authors: Qizhang Li, Yiwen Guo, Wangmeng Zuo, Hao Chen

NeurIPS 2023

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show that it outperforms state-of-the-arts by large margins in attacking various victim models on ImageNet (+10.07% on average) and CIFAR-10 (+3.88% on average). |
| Researcher Affiliation | Collaboration | Qizhang Li (1, 2), Yiwen Guo (3), Wangmeng Zuo (1), Hao Chen (4); 1: Harbin Institute of Technology, 2: Tencent Security Big Data Lab, 3: Independent Researcher, 4: UC Davis |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is at https://github.com/qizhangli/ILPD-attack. |
| Open Datasets | Yes | Our experiments were conducted on CIFAR-10 [24] and ImageNet [35]... We performed adversarial attacks on all test data in CIFAR-10 and 5000 randomly sampled examples from the ImageNet validation data. |
| Dataset Splits | Yes | We performed adversarial attacks on all test data in CIFAR-10 and 5000 randomly sampled examples from the ImageNet validation data. |
| Hardware Specification | Yes | All experiments are performed on an NVIDIA V100 GPU. |
| Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | We set the perturbation budget to ϵ = 4/255 and 8/255 for attacks on CIFAR-10 and ImageNet, respectively... We run 100 iterations with a step size of 1/255 for all attack methods... ILPD was performed at the output of the fourth VGG block for VGG-19 on CIFAR-10 and the output of the last building block of the second ResNet meta layer for ResNet-50 on ImageNet, with γ tuned in the range satisfying 0.1 ≤ 1/γ ≤ 0.5. (A hedged sketch of this configuration follows the table.) |
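
For concreteness, the Experiment Setup row above maps onto a standard iterative sign-gradient attack loop. The following is a minimal sketch under the reported hyperparameters, assuming a PyTorch workflow (the linked repository is PyTorch-based); it is not the authors' implementation, and `ilpd_loss` is a hypothetical placeholder for the ILPD objective, whose actual intermediate-level formulation lives in the repository.

```python
import torch
import torch.nn.functional as F

# Hyperparameters as reported in the paper's setup (ImageNet values;
# CIFAR-10 uses EPS = 4 / 255 instead).
EPS = 8 / 255        # L-infinity perturbation budget
STEP_SIZE = 1 / 255  # per-iteration step size
N_ITERS = 100        # number of attack iterations


def ilpd_loss(model, x_adv, y):
    """Hypothetical placeholder for the ILPD objective; shown here as a
    plain cross-entropy loss. The real intermediate-level loss is in
    https://github.com/qizhangli/ILPD-attack."""
    return F.cross_entropy(model(x_adv), y)


def attack(model, x, y):
    """Iterative sign-gradient loop under the reported budget."""
    x_adv = x.clone().detach()
    for _ in range(N_ITERS):
        x_adv.requires_grad_(True)
        loss = ilpd_loss(model, x_adv, y)
        (grad,) = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + STEP_SIZE * grad.sign()   # ascend the loss
            x_adv = x + (x_adv - x).clamp(-EPS, EPS)  # project to the eps ball
            x_adv = x_adv.clamp(0, 1)                 # keep pixels valid
    return x_adv.detach()
```

Note that the reported constraint 0.1 ≤ 1/γ ≤ 0.5 corresponds to tuning γ over [2, 10], and the choice of intermediate layer (fourth VGG block on CIFAR-10, second ResNet meta layer on ImageNet) sits inside the objective rather than in this outer loop.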