Adv-Diffusion: Imperceptible Adversarial Face Identity Attack via Latent Diffusion Model
Authors: Decheng Liu, Xijun Wang, Chunlei Peng, Nannan Wang, Ruimin Hu, Xinbo Gao
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive qualitative and quantitative experiments on the public FFHQ and CelebA-HQ datasets prove the proposed method achieves superior performance compared with the state-of-the-art methods without an extra generative model training process. |
| Researcher Affiliation | Academia | 1 School of Cyber Engineering, Xidian University, Xi'an, China; 2 Key Laboratory of Artificial Intelligence, Ministry of Education, Shanghai, China; 3 School of Artificial Intelligence, Xidian University, Xi'an, China; 4 School of Telecommunications Engineering, Xidian University, Xi'an, China; 5 Hangzhou Institute of Technology, Xidian University, Xi'an, China; 6 Chongqing Key Laboratory of Image Cognition, Chongqing University of Posts and Telecommunications, Chongqing, China |
| Pseudocode | Yes | Algorithm 1: Adv-Diffusion (a hedged sketch of this kind of pipeline follows the table) |
| Open Source Code | Yes | The source code is available at https://github.com/kopper-xdu/Adv-Diffusion. |
| Open Datasets | Yes | we use two publicly available face datasets for evaluation: (1) FFHQ is a widely used high-quality face dataset (Karras, Laine, and Aila 2019), which contains almost 70,000 high-quality face images with 1024×1024 resolution. (2) CelebA-HQ is a high-quality face dataset (Karras et al. 2018) constructed based on the CelebA dataset, which contains almost 30,000 face images with 512×512 resolution. |
| Dataset Splits | No | The paper describes how images are selected for evaluation but does not provide explicit training, validation, and test dataset splits for model training, as the method utilizes pre-trained models. |
| Hardware Specification | Yes | We conduct experiments on RTX 3090 GPU. |
| Software Dependencies | No | The paper mentions software components like 'PyTorch implementation of EHANet' and 'open-source stable diffusion work' but does not specify version numbers for these software dependencies. |
| Experiment Setup | Yes | For experimental settings, we set 45 steps to generate adversarial samples. And we set s = 300 by default. (These defaults are illustrated in the usage example after the table.) |
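
The pseudocode row above points to Algorithm 1 (Adv-Diffusion). For orientation, here is a minimal sketch of a latent-diffusion-based adversarial identity attack of that general kind: encode the face into the diffusion latent space, noise it forward, then denoise while steering the latent with an identity loss. All model handles (`vae`, `unet`, `scheduler`, `face_encoder`), the unconditional U-Net call, and the simple gradient-guidance rule are assumptions for illustration; the authors' method additionally uses a face-parsing mask to keep perturbations off salient facial regions, which is omitted here. This is not the released implementation (see the linked repository).

```python
# Hypothetical sketch of a latent-space adversarial identity attack.
# Model handles and hyperparameter names are assumptions, not the
# authors' released code.
import torch
import torch.nn.functional as F


def adv_diffusion_attack(
    image,             # source face, shape (1, 3, H, W), values in [-1, 1]
    target_embedding,  # identity embedding of the target face
    vae,               # pretrained latent-diffusion autoencoder (assumed, diffusers-style)
    unet,              # pretrained denoising U-Net (assumed; conditioning inputs omitted)
    scheduler,         # DDIM-style noise scheduler (assumed, diffusers-style)
    face_encoder,      # pretrained face-recognition embedder (assumed)
    num_steps=45,      # denoising steps reported in the paper
    noise_step=300,    # "s = 300": timestep the latent is noised forward to
    guidance_weight=1.0,
):
    # 1. Encode the clean face into the diffusion latent space
    #    (VAE scaling factors omitted for brevity).
    latent = vae.encode(image).latent_dist.sample()

    # 2. Forward-diffuse the latent to timestep `noise_step`, leaving the
    #    reverse process room to absorb an imperceptible perturbation.
    noise = torch.randn_like(latent)
    t_start = torch.tensor([noise_step], device=latent.device)
    latent = scheduler.add_noise(latent, noise, t_start)

    # 3. Reverse diffusion with identity-guided adversarial gradients.
    scheduler.set_timesteps(num_steps)
    for t in scheduler.timesteps:
        if t > noise_step:
            continue  # only denoise from the chosen starting point

        latent = latent.detach().requires_grad_(True)
        eps = unet(latent, t).sample           # predicted noise
        step = scheduler.step(eps, t, latent)
        pred_x0 = step.pred_original_sample    # current clean-image estimate

        # Adversarial guidance: push the decoded face's identity embedding
        # toward the target identity (impersonation setting).
        decoded = vae.decode(pred_x0).sample
        emb = face_encoder(decoded)
        id_loss = 1.0 - F.cosine_similarity(emb, target_embedding).mean()
        grad = torch.autograd.grad(id_loss, latent)[0]

        latent = (step.prev_sample - guidance_weight * grad).detach()

    # 4. Decode the attacked latent back to image space.
    return vae.decode(latent).sample
```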
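For the experiment-setup row, a hypothetical invocation of the sketch above with the reported defaults (45 denoising steps, s = 300) could look like this; the model objects are assumed to be loaded beforehand.

```python
# Hypothetical call using the defaults reported in the paper's setup.
adv_image = adv_diffusion_attack(
    image, target_embedding,
    vae=vae, unet=unet, scheduler=scheduler, face_encoder=face_encoder,
    num_steps=45,    # "we set 45 steps to generate adversarial samples"
    noise_step=300,  # "we set s = 300 by default"
)
```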