Content-based Unrestricted Adversarial Attack

Authors: Zhaoyu Chen, Bo Li, Shuang Wu, Kaixun Jiang, Shouhong Ding, Wenqiang Zhang

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experimentation and visualization demonstrate the efficacy of ACA, which surpasses state-of-the-art attacks by an average of 13.3-50.4% on normally trained models and 16.8-48.0% against defense methods, respectively.
Researcher Affiliation | Collaboration | (1) Academy for Engineering and Technology, Fudan University; (2) Youtu Lab, Tencent; (3) School of Computer Science, Fudan University
Pseudocode | Yes | Algorithm 1: Adversarial Content Attack
Open Source Code | No | The paper contains no explicit statement about releasing source code for the described methodology, nor a direct link to a code repository.
Open Datasets | Yes | Our experiments are conducted on the ImageNet-compatible Dataset [29]. The dataset consists of 1,000 images from ImageNet's validation set [8] and is widely used in [10, 13, 58, 60].
Dataset Splits | No | The paper uses the ImageNet-compatible Dataset [29], consisting of 1,000 images from ImageNet's validation set [8], but does not describe any train/validation/test splits of this dataset for its attack or evaluation methodology.
Hardware Specification | Yes | Our experiments are run on an NVIDIA Tesla A100 with PyTorch.
Software Dependencies | Yes | The version of Stable Diffusion [42] is v1.4.
Experiment Setup | Yes | DDIM steps T = 50, image mapping iterations N_i = 10, attack iterations N_a = 10, β = 0.1, ζ = 0.01, η = 0.04, κ = 0.1, and µ = 1.
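The hyperparameters reported in the Experiment Setup row could be collected into a single configuration object for a reimplementation attempt. This is a minimal illustrative sketch: the `ACAConfig` class and the step-count helper are assumptions for illustration, not the authors' released code.

```python
from dataclasses import dataclass


@dataclass
class ACAConfig:
    """Hyperparameters as reported in the paper's experiment setup.

    Field names are illustrative assumptions; only the values come
    from the paper.
    """
    ddim_steps: int = 50     # T: DDIM sampling steps
    map_iters: int = 10      # N_i: image-to-latent mapping iterations
    attack_iters: int = 10   # N_a: adversarial optimization iterations
    beta: float = 0.1        # β
    zeta: float = 0.01       # ζ
    eta: float = 0.04        # η
    kappa: float = 0.1       # κ
    mu: float = 1.0          # µ


def total_optimization_steps(cfg: ACAConfig) -> int:
    """Rough per-image step count implied by the setup: one DDIM
    pass plus mapping and attack iterations (an illustrative tally,
    not a figure stated in the paper)."""
    return cfg.ddim_steps + cfg.map_iters + cfg.attack_iters


cfg = ACAConfig()
print(total_optimization_steps(cfg))  # 70
```

Grouping the values this way makes it easy to sweep individual knobs (e.g. `attack_iters`) when reproducing the reported results.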