Adv-Makeup: A New Imperceptible and Transferable Attack on Face Recognition
Authors: Bangjie Yin, Wenxuan Wang, Taiping Yao, Junfeng Guo, Zelun Kong, Shouhong Ding, Jilin Li, Cong Liu
IJCAI 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through extensive experiments on the makeup dataset and LFW, Adv-Makeup is shown to generate imperceptible and transferable adversarial examples. A case study applying Adv-Makeup in the physical world against two popular commercial FR platforms also demonstrates its efficacy in practice. |
| Researcher Affiliation | Collaboration | 1Youtu Lab, Tencent, Shanghai, China, 2Fudan University, Shanghai, China, 3The University of Texas at Dallas, Dallas, Texas, USA |
| Pseudocode | Yes | Algorithm 1: The proposed Adv-Makeup. |
| Open Source Code | No | The paper does not provide an explicit statement or link for the open-sourcing of their code for the methodology described. |
| Open Datasets | Yes | Two public datasets are utilized in the experiments: 1) LFW [Huang et al., 2008] contains 13233 web-collected images of 5749 subjects with 6000 same/different-identity comparisons, most of them low-quality (see the LFW loading sketch after the table). 2) [Gu et al., 2019] released a high-quality makeup face database comprising 333 frontal before-makeup faces and 302 after-makeup faces. |
| Dataset Splits | No | The paper specifies how data is used for training and evaluation (e.g., 'random 100 source before-makeup faces and 10 targets form 1000 comparisons for impersonation attacks'; see the pairing sketch after the table), but it does not provide explicit train/validation/test split percentages or absolute sample counts for the datasets used. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions the models and architectures used (e.g., 'pre-trained VGG16 model', 'IR152', 'IRSE50', 'Mobile Face', 'Facenet', 'LADN with a U-Net structure') but does not specify software dependencies with version numbers (e.g., programming languages, libraries, or frameworks). |
| Experiment Setup | Yes | The encoder and decoder architectures are based on LADN [Gu et al., 2019] with a U-Net structure. Training uses SGD with an initial learning rate of 0.001 and momentum 0.9. To balance the effects of the different losses in Eq. 9, α1, α2, β1, β2, β3 are set to 1, 1, 0.1, 0.1, 0.1, respectively (see the training-setup sketch after the table). |
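
The LFW verification protocol quoted in the Open Datasets row can be obtained through scikit-learn's built-in loader. This is a hypothetical sketch (the paper does not say how LFW was loaded), but the 6000-pair count matches the official evaluation protocol.

```python
from sklearn.datasets import fetch_lfw_pairs

# Hypothetical loading sketch -- the paper does not state how LFW was read in.
# subset="10_folds" returns the full official evaluation protocol of 6000
# same/different-identity pairs, matching the count quoted above.
lfw = fetch_lfw_pairs(subset="10_folds", color=True)
print(lfw.pairs.shape)   # (6000, 2, H, W, 3) face-image pairs
print(lfw.target.shape)  # (6000,); 1 = same identity, 0 = different
```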
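
The evaluation pairing described in the Dataset Splits row is simple combinatorics: 100 source faces crossed with 10 targets yields 1000 impersonation comparisons. A minimal sketch, with placeholder file names since the paper samples the sources randomly:

```python
from itertools import product

# Placeholder file names; the actual image selection in the paper is random.
source_faces = [f"source_{i:03d}.jpg" for i in range(100)]  # before-makeup faces
target_faces = [f"target_{j:02d}.jpg" for j in range(10)]   # impersonation targets

# Every (source, target) combination gives one impersonation comparison:
# 100 * 10 = 1000, matching the protocol quoted in the table.
comparisons = list(product(source_faces, target_faces))
assert len(comparisons) == 1000
```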
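
The Experiment Setup row fully specifies the optimizer and loss weights, which translate directly into a PyTorch configuration. In the sketch below, the generator module and the names of the five loss terms are placeholders (Eq. 9 itself is not reproduced in the table); only SGD with learning rate 0.001, momentum 0.9, and the weights 1, 1, 0.1, 0.1, 0.1 come from the paper.

```python
import torch

# Stand-in for the LADN-based U-Net generator; the real architecture is not
# reproduced here.
generator = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)

# Optimizer settings as reported: SGD, initial lr 0.001, momentum 0.9.
optimizer = torch.optim.SGD(generator.parameters(), lr=0.001, momentum=0.9)

# Loss weights for Eq. 9 as reported; which term each weight multiplies is
# an assumption, since Eq. 9 is not quoted in the table.
alpha1, alpha2 = 1.0, 1.0
beta1, beta2, beta3 = 0.1, 0.1, 0.1

def total_loss(l_a, l_b, l_c, l_d, l_e):
    """Weighted combination of the five loss terms balanced in Eq. 9."""
    return alpha1 * l_a + alpha2 * l_b + beta1 * l_c + beta2 * l_d + beta3 * l_e
```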