Sparse and Imperceptible Adversarial Attack via a Homotopy Algorithm

Authors: Mingkang Zhu, Tianlong Chen, Zhangyang Wang

ICML 2021

Reproducibility Variable | Result | LLM Response

Research Type | Experimental | In this section, we conduct comprehensive experiments with diverse setups to validate the effectiveness of the proposed homotopy algorithm on the CIFAR-10 (Krizhevsky, 2009) and ImageNet (Deng et al., 2009) datasets.

Researcher Affiliation | Academia | The University of Texas at Austin, USA.

Pseudocode | Yes | Algorithm 1: Our Subroutine for Initial Weight Search (Lambda Search); Algorithm 2: The Homotopy Attack Algorithm.

Open Source Code | Yes | Our codes are available at: https://github.com/VITA-Group/SparseADV_Homotopy

Open Datasets | Yes | Extensive experiments on the CIFAR-10 (Krizhevsky, 2009) and ImageNet (Deng et al., 2009) datasets endorse the superiority of our new homotopy attack.

Dataset Splits | Yes | For the nontargeted attack, we randomly select 5000 images from the test set of CIFAR-10, and 1000 images from the validation set of ImageNet as the input images.

Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU models, CPU models, or cloud computing instances with specifications) used to run its experiments.

Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies (e.g., libraries, frameworks, or programming languages) used in the experiments.

Experiment Setup | Yes | Since we are highly interested in generating sparse and invisible adversarial perturbations, rather than extremely sparse but visible ones, we maintain a relatively small ℓ∞-norm for the generated perturbations. That is, we set ϵ to 0.05, which is a relatively small number in the [0, 1] range of a valid image. ... The confidence parameter κ is set to 0.
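The two hyperparameters quoted in the Experiment Setup row (the ℓ∞ budget ϵ = 0.05 and the confidence κ = 0) can be illustrated with a minimal sketch. This is not the authors' implementation; the helper names `project_perturbation` and `cw_margin_loss` are hypothetical, and the margin loss is the standard Carlini-Wagner formulation that the κ parameter conventionally comes from.

```python
import numpy as np

EPSILON = 0.05  # l_inf bound on the perturbation, for images scaled to [0, 1]
KAPPA = 0.0     # confidence margin; 0 means any misclassification counts

def project_perturbation(delta: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Clip a perturbation to the l_inf ball of radius EPSILON and keep
    the perturbed image x + delta inside the valid [0, 1] pixel range."""
    delta = np.clip(delta, -EPSILON, EPSILON)
    return np.clip(x + delta, 0.0, 1.0) - x

def cw_margin_loss(logits: np.ndarray, target: int) -> float:
    """Carlini-Wagner style margin for a targeted attack: the loss is zero
    once the target logit exceeds every other logit by at least KAPPA."""
    other = np.max(np.delete(logits, target))
    return max(other - logits[target] + KAPPA, 0.0)
```

With κ = 0, the loss vanishes as soon as the target class has the highest logit; a larger κ would force a more confident (and typically more perturbed) misclassification.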