Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Adversarial Attacks on Both Face Recognition and Face Anti-spoofing Models
Authors: Fengfan Zhou, Qianyu Zhou, Hefei Ling, Xuequan Lu
IJCAI 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments showcase the superiority of our proposed attack method to state-of-the-art adversarial attacks. ... 4 Experiment In our experiments, we demonstrate the superiority and key properties of the proposed method. Section 4.1 details the experimental settings, while Section 4.2 presents the comparative results. Additionally, Section 4.3 provides an analysis of the ablation studies. |
| Researcher Affiliation | Academia | 1 School of Computer Science and Technology, Huazhong University of Science and Technology; 2 College of Computer Science and Technology, Jilin University; 3 Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, JLU; 4 Department of Computer Science and Software Engineering, The University of Western Australia. EMAIL, EMAIL, EMAIL |
| Pseudocode | No | The paper describes the methodology using text and mathematical equations, but it does not contain any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statements about code availability, nor does it provide a link to a code repository. |
| Open Datasets | Yes | We use the Oulu-NPU [Boulkenafet et al., 2017] and CASIA-AMFSD [Zhang et al., 2012] for evaluation. |
| Dataset Splits | Yes | We randomly sample 1,000 negative image pairs from both datasets. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments, such as GPU or CPU models. |
| Software Dependencies | No | The paper mentions the use of various models (e.g., IR152, IRSE50, FaceNet, MobileFace) but does not specify any software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | We employ MF as the surrogate model, and craft adversarial examples on the OULU-NPU. ... The objective of adversarial attacks on FR in our research is to generate an adversarial example $x^{adv}$ that induces the victim FR model $F^{vct}$ to misclassify it as the target sample $x^t$, while also preserving a high level of visual similarity between $x^{adv}$ and $x^s$. Specifically, the objective can be stated as follows: $x^{adv} = \arg\min_{x^{adv}} \mathcal{D}\left(F^{vct}(x^{adv}),\, F^{vct}(x^t)\right)$ s.t. $\lVert x^{adv} - x^s \rVert_p \le \epsilon$, where $\mathcal{D}$ refers to a predefined distance metric, while $\epsilon$ specifies the maximum magnitude of permissible perturbation. ... $x$ is optimized over $b$ iterations using the following formula [Kurakin et al., 2017]: $x^{\alpha_k,\,t} = x^{\alpha_k,\,t-1} - \frac{\epsilon}{b}\,\mathrm{sign}\!\left(\nabla_{x^{\alpha_k,\,t-1}} L^{k,\alpha_k}\!\left(x^{\alpha_k,\,t-1}\right)\right)$ |
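The iterative sign-gradient update quoted in the Experiment Setup row follows the I-FGSM scheme of Kurakin et al. (2017): take $b$ steps of size $\epsilon/b$ in the direction that decreases the attack loss, while keeping the perturbation inside the $\ell_\infty$ ball of radius $\epsilon$ around the source image. The sketch below illustrates only that generic update, not the paper's full method; `grad_fn`, `eps`, and `b` are illustrative names, and the loss is stood in for by whatever gradient callable the caller supplies.

```python
import numpy as np

def iterative_sign_attack(x_s, grad_fn, eps=0.1, b=10):
    """Generic I-FGSM-style sketch (Kurakin et al., 2017).

    x_s     : source image as a NumPy array.
    grad_fn : callable returning the gradient of the attack loss w.r.t. x
              (e.g. a distance between victim-model features and the target's).
    eps     : maximum L_inf perturbation magnitude.
    b       : number of iterations; each step has size eps / b.
    """
    x = x_s.copy()
    alpha = eps / b
    for _ in range(b):
        g = grad_fn(x)                         # gradient of the loss at current x
        x = x - alpha * np.sign(g)             # descend to pull x toward the target
        x = np.clip(x, x_s - eps, x_s + eps)   # project back into the eps-ball
    return x

# Toy usage with a quadratic surrogate loss ||x - x_t||^2, whose gradient
# is 2 (x - x_t); the attack nudges x_s toward x_t within the eps-ball.
x_s = np.zeros(3)
x_t = np.ones(3)
x_adv = iterative_sign_attack(x_s, lambda x: 2 * (x - x_t), eps=0.1, b=10)
```

In a real attack `grad_fn` would backpropagate through the victim (or surrogate) face-recognition model; the clip step is what enforces the constraint $\lVert x^{adv} - x^s \rVert_\infty \le \epsilon$ from the objective above.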