GreedyFool: Distortion-Aware Sparse Adversarial Attack

Authors: Xiaoyi Dong, Dongdong Chen, Jianmin Bao, Chuan Qin, Lu Yuan, Weiming Zhang, Nenghai Yu, Dong Chen

NeurIPS 2020

Reproducibility Variable Result LLM Response
Research Type | Experimental | Experiments demonstrate that compared with the state-of-the-art method, we only need to modify 3× fewer pixels under the same sparse perturbation setting. For target attack, the success rate of our method is 9.96% higher than the state-of-the-art method under the same pixel budget. Code can be found at https://github.com/LightDXY/GreedyFool. Experiments on the CIFAR10 [22] and ImageNet [12] datasets show that the sparsity of our method is much better than state-of-the-art methods. To summarize, the main contributions of this paper are threefold: ... 3) Extensive experiments have demonstrated the superb performance of our method.
Researcher Affiliation | Collaboration | Xiaoyi Dong1, Dongdong Chen2, Jianmin Bao2, Chuan Qin1, Lu Yuan2, Weiming Zhang1, Nenghai Yu1, Dong Chen2. 1University of Science and Technology of China, 2Microsoft Research. Emails: dlight@mail.ustc.edu.cn, qc94@mail.ustc.edu.cn, zhangwm@ustc.edu.cn, ynh@ustc.edu.cn, cddlyf@gmail.com, {jianbao, luyuan, doch}@microsoft.com
Pseudocode | Yes | Algorithm 1 GreedyFool. Input: source image x, target model H, distortion map ϱ. Parameters: max iterations T, threshold ϵ, select number k. Output: adversarial sample x_adv
Open Source Code | Yes | Code can be found at https://github.com/LightDXY/GreedyFool.
Open Datasets | Yes | Experiments on the CIFAR10 [22] and ImageNet [12] datasets show that the sparsity of our method is much better than state-of-the-art methods.
Dataset Splits | Yes | We generate adversarial samples with 5000 images randomly selected from the ImageNet validation set and 10000 images from the CIFAR10 test set.
Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments, such as GPU models, CPU specifications, or memory.
Software Dependencies | No | The paper states 'we use the official implementation of SparseFool [1] and PGD0 [2] and follow their default settings. For JSMA, we use the implementation from Foolbox 2.4 [3].' While Foolbox 2.4 has a version, it is a third-party tool used for comparison, not a direct dependency of the authors' own method with a specified version. No other specific software dependencies with version numbers are mentioned for the authors' own implementation.
Experiment Setup | Yes | κ is a confidence factor to control the attack strength; we set κ = 0 by default and enlarge it for better black-box transferability. τ1 and τ2 are predefined thresholds, set to the 70th and 25th percentiles of ϱ by default. For our GreedyFool, we set the select number k = 1 when ϵ ≥ 128; when ϵ < 128, we initialize k to 1 and increase it by 1 after each iteration for faster speed. δ is a predefined threshold, and we set δ = 8/255 in our experiments. λ is the loss weight for the regularization loss; we choose 1e-5 in our experiments by default.
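The pseudocode row above lists only the algorithm's interface (source image, target model, distortion map, budget parameters). To make the shape of the method concrete, here is a minimal NumPy sketch of what a GreedyFool-style greedy increasing stage could look like. The function names (`grad_fn`, `is_adv`), the saliency weighting by the distortion map, and the per-pixel step of δ = 8/255 are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def greedy_select(grad, distortion, k=1):
    """One greedy step (hypothetical sketch): rank pixels by gradient
    magnitude weighted by the inverse distortion cost, so strong-gradient,
    low-perceptual-cost pixels are chosen first; return the top-k."""
    saliency = np.abs(grad).sum(axis=-1) / (distortion + 1e-8)
    flat = saliency.ravel()
    idx = np.argpartition(flat, -k)[-k:]  # indices of the k largest scores
    return np.unravel_index(idx, saliency.shape)

def greedyfool_sketch(x, grad_fn, is_adv, distortion,
                      max_iter=50, eps=8 / 255, k=1):
    """Increasing stage of a GreedyFool-style sparse attack (hedged
    sketch): greedily perturb the k most promising pixels per iteration
    until the model is fooled or the iteration budget runs out.

    x          : H x W x C image in [0, 1]
    grad_fn    : callable returning the loss gradient w.r.t. the image
    is_adv     : callable returning True once the attack succeeds
    distortion : H x W perceptual-cost map (the paper's ϱ)
    """
    x_adv = x.copy()
    modified = np.zeros(x.shape[:2], dtype=bool)
    for _ in range(max_iter):
        if is_adv(x_adv):
            break
        grad = grad_fn(x_adv)
        rows, cols = greedy_select(grad, distortion, k)
        for r, c in zip(rows, cols):
            # Push the chosen pixel along the gradient sign by one step
            # of eps (delta = 8/255 in the paper), staying in [0, 1].
            x_adv[r, c] = np.clip(
                x_adv[r, c] + eps * np.sign(grad[r, c]), 0.0, 1.0)
            modified[r, c] = True
    return x_adv, modified
```

The paper additionally describes a reducing stage that drops unnecessary pixels after success; that phase is omitted here for brevity.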