Detecting Adversarial Faces Using Only Real Face Self-Perturbations
Authors: Qian Wang, Yongqin Xian, Hefei Ling, Jinyuan Zhang, Xiaorui Lin, Ping Li, Jiazhong Chen, Ning Yu
IJCAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments conducted on LFW and CelebA-HQ datasets with eight gradient-based and two GAN-based attacks validate that our method generalizes to a variety of unseen adversarial attacks. |
| Researcher Affiliation | Collaboration | ¹Huazhong University of Science and Technology, Wuhan, China; ²Google, Switzerland; ³Software Development Center, Industrial and Commercial Bank of China; ⁴Salesforce Research, USA |
| Pseudocode | Yes | Algorithm 1 Self-perturbation for gradient-based attack |
| Open Source Code | Yes | Code at https://github.com/cc13qq/SAPD |
| Open Datasets | Yes | Face images in this work are sampled from LFW [Gary et al., 2007] and CelebA-HQ [Karras et al., 2017] datasets. |
| Dataset Splits | No | No explicit training/validation/test splits with specific percentages or counts were provided in the main text. The paper describes a training phase and a testing phase but does not detail a distinct validation split. |
| Hardware Specification | No | The paper mentions training a convolutional neural network and using Xception Net as a backbone, but does not provide specific details about the hardware used (e.g., GPU model, CPU, memory). |
| Software Dependencies | No | The paper mentions using 'dlib', 'Torchattacks', and 'OpenOOD' but does not specify their version numbers, which are required for reproducibility. |
| Experiment Setup | Yes | We set N = 7 to produce a 7×7 feature map in the last convolution layer and choose ReLU as the activation function. The perturbation magnitude ϵ used for self-perturbations and for producing adv-faces is set to 5/255, a small value. The threshold γ on the convex hull of the gradient image in Algorithm 2 is set to 50. The regularization loss weight β is set to 0.1. Training epochs are set to 5, and convergence is observed. |
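The hyperparameters quoted in the Experiment Setup row can be collected into a minimal sketch. The FGSM-style `self_perturb` helper, the array shapes, and the random inputs below are illustrative assumptions for a gradient-based self-perturbation, not the paper's actual implementation:

```python
import numpy as np

# Hyperparameters as quoted in the Experiment Setup row.
EPSILON = 5 / 255   # perturbation magnitude for self-perturbations / adv-faces
GAMMA = 50          # threshold on the gradient image (Algorithm 2)
BETA = 0.1          # regularization loss weight
EPOCHS = 5          # training epochs

def self_perturb(image, grad, epsilon=EPSILON):
    """Hypothetical FGSM-style self-perturbation of a real face image.

    `image` and `grad` are float arrays with pixels in [0, 1]; the
    perturbed image is clipped back into the valid pixel range, so the
    perturbation stays inside an L-infinity ball of radius `epsilon`.
    """
    perturbed = image + epsilon * np.sign(grad)
    return np.clip(perturbed, 0.0, 1.0)

# Usage: perturb a random "face" along a random gradient direction.
rng = np.random.default_rng(0)
face = rng.random((112, 112, 3))          # stand-in for an aligned face crop
grad = rng.standard_normal(face.shape)    # stand-in for a loss gradient
adv_like = self_perturb(face, grad)
max_shift = np.abs(adv_like - face).max()
```

The L-infinity bound is the point of the 5/255 setting: the self-perturbed real faces remain visually indistinguishable from the originals while mimicking the statistics of adversarial examples.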