Step Vulnerability Guided Mean Fluctuation Adversarial Attack against Conditional Diffusion Models

Authors: Hongwei Yu, Jiansheng Chen, Xinlong Ding, Yudong Zhang, Ting Tang, Huimin Ma

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments show that our algorithm can steadily cause the mean shift of the predicted noises so as to disrupt the entire reverse generation process and degrade the generation results significantly. We also demonstrate that the step vulnerability is intrinsic to the reverse process by verifying its effectiveness in an attack method other than MFA. (See the loss sketch after this table.)
Researcher Affiliation | Academia | Hongwei Yu¹, Jiansheng Chen¹*, Xinlong Ding¹, Yudong Zhang², Ting Tang¹, Huimin Ma¹. ¹School of Computer and Communication Engineering, University of Science and Technology Beijing, China; ²Department of Electronic Engineering, Tsinghua University, China. Emails: yuhongwei22@xs.ustb.edu.cn, jschen@ustb.edu.cn, dingxl22@xs.ustb.edu.cn, zhangyd16@mails.tsinghua.edu.cn, m202220901@xs.ustb.edu.cn, mhmpub@ustb.edu.cn
Pseudocode | No | The paper presents equations and a flowchart (Figure 1) to describe its algorithm, but it does not include a clearly labeled 'Pseudocode' or 'Algorithm' block with structured steps.
Open Source Code | Yes | Code and supplementary material are available at https://github.com/yuhongwei22/MFA
Open Datasets | Yes | For the inpainting task, we utilize the Places dataset (Zhou et al. 2017). For the super-resolution task, we employ the ImageNet dataset (Deng et al. 2009).
Dataset Splits | No | The paper mentions using specific datasets and selecting images for attack evaluation (e.g., 'We randomly select 2,000 images from Places365'), but it does not explicitly describe standard train/validation/test splits or cross-validation for training or evaluating the proposed method.
Hardware Specification | Yes | We use 8 NVIDIA RTX 3090 GPUs for all experiments.
Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies or libraries used (e.g., Python, PyTorch, CUDA), only mentioning the datasets and the models attacked.
Experiment Setup | Yes | We set the per-step perturbation budget to 1/255, the total budget to 8/255, and the number of attack iterations to 70. We conduct our experiments on inpainting and super-resolution tasks. (See the PGD-style sketch after this table.)
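
The "Research Type" row quotes the paper's central claim: the attack steadily shifts the mean of the noises predicted by the diffusion model, which derails the entire reverse generation process. Below is a minimal sketch of one plausible reading of such a mean-shift objective in PyTorch; the function name, the (B, C, H, W) tensor layout, and the L1 distance between channel means are assumptions on our part, not the paper's actual loss (the released code linked above defines the real objective).

```python
import torch

def mean_shift_loss(eps_adv: torch.Tensor, eps_clean: torch.Tensor) -> torch.Tensor:
    """Hypothetical mean-fluctuation objective: reward shifting the spatial
    mean of the adversarial noise prediction away from the clean prediction's
    mean. Inputs are assumed (B, C, H, W) outputs of a noise predictor
    epsilon_theta at some reverse step t; the paper's exact loss may differ."""
    mu_adv = eps_adv.mean(dim=(2, 3))      # per-channel spatial means, (B, C)
    mu_clean = eps_clean.mean(dim=(2, 3))
    # An attacker maximizes this, so the mean of the predicted noise drifts
    # and the error compounds over the reverse diffusion steps.
    return (mu_adv - mu_clean).abs().sum(dim=1).mean()
```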
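The "Experiment Setup" row fixes a per-step budget of 1/255, a total budget of 8/255, and 70 attack iterations, which reads like a standard PGD schedule under an L-infinity constraint. The sketch below wires those numbers around the loss above; `eps_model(x_t, t, cond)` is a hypothetical noise-predictor interface, `cond` stands for the conditioning input (the masked image for inpainting or the low-resolution image for super-resolution), and the paper's step-vulnerability selection of which timesteps to attack is deliberately omitted.

```python
import torch

def pgd_mean_shift_attack(eps_model, cond, x_t, t,
                          iters=70, alpha=1/255, eps_total=8/255):
    """PGD-style perturbation of the conditioning image `cond`.
    `eps_model` is an assumed interface, not the released MFA API."""
    with torch.no_grad():
        eps_clean = eps_model(x_t, t, cond)       # reference prediction
    delta = torch.zeros_like(cond, requires_grad=True)
    for _ in range(iters):                        # 70 attack iterations
        loss = mean_shift_loss(eps_model(x_t, t, cond + delta), eps_clean)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()    # per-step budget 1/255
            delta.clamp_(-eps_total, eps_total)   # total budget 8/255
        delta.grad = None
    return (cond + delta).detach().clamp(0, 1)    # keep a valid image
```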