Unified Physical-Digital Face Attack Detection

Authors: Hao Fang, Ajian Liu, Haocheng Yuan, Junze Zheng, Dingheng Zeng, Yanhong Liu, Jiankang Deng, Sergio Escalera, Xiaoming Liu, Jun Wan, Zhen Lei

IJCAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on UniAttackData and three other datasets demonstrate the superiority of our approach for unified face attack detection.
Researcher Affiliation | Collaboration | (1) MAIS, Institute of Automation of Chinese Academy of Sciences, Beijing, China; (2) Macau University of Science and Technology (MUST), Macau, China; (3) Mashang Consumer Finance Co., Ltd., Chongqing, China; (4) Imperial College London, London, UK; (5) Computer Vision Center (CVC), Barcelona, Catalonia, Spain; (6) Department of Computer Science and Engineering, Michigan State University; (7) CAIR, Hong Kong Institute of Science & Innovation, Chinese Academy of Sciences
Pseudocode | No | The paper describes its method but does not provide any pseudocode or algorithm blocks.
Open Source Code | No | Dataset link: https://sites.google.com/view/face-anti-spoofing-challenge/dataset-download/uniattackdatacvpr2024. This link is for the dataset, not the source code for the methodology. The paper does not contain an explicit statement about releasing code for the described method.
Open Datasets | Yes | To address these issues, we collect a Unified physical-digital Attack dataset, called UniAttackData. The dataset consists of 1,800 participations of 2 and 12 physical and digital attacks, respectively, resulting in a total of 28,706 videos. Dataset link: https://sites.google.com/view/face-anti-spoofing-challenge/dataset-download/uniattackdatacvpr2024. To evaluate the performance of the proposed method and existing approaches, we employ four datasets for face forgery detection, i.e., our proposed UniAttackData, FaceForensics++ (FF++) [Rossler et al., 2019], OULU-NPU [Boulkenafet et al., 2017], and JFSFDB [Yu et al., 2022].
Dataset Splits | Yes | We define two protocols for UniAttackData. (1) Protocol 1 aims to evaluate under the unified attack detection task. As shown in Tab. 2, the training, validation, and test sets contain live faces and all attacks. Table 2: Amount of train/eval/test images of different types under three different protocols: P1, P2.1, and P2.2. (A sketch of a Protocol-1 style split is given after this table.)
Hardware Specification | No | The paper does not specify any hardware details (e.g., GPU models, CPU types, or cloud computing resources) used for running the experiments.
Software Dependencies | No | The paper mentions using CLIP and other models but does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow, or CUDA versions).
Experiment Setup | No | Together with the Unified Knowledge Mining (UKM) loss, the final objective is defined as L_Total = L_CLS + λ · L_UKM, where λ is a hyper-parameter to trade off between the two losses. This mentions the hyper-parameter λ but does not provide its specific value or other detailed experimental setup parameters such as learning rate, batch size, or number of epochs.
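For illustration, the quoted objective could be assembled as in the minimal PyTorch-style sketch below. The function name, the use of cross-entropy for L_CLS, and the default λ value are assumptions, not details from the paper; the UKM term is treated as a tensor computed elsewhere, since its exact formulation is not reproduced in this assessment.

```python
import torch
import torch.nn.functional as F

def total_loss(logits: torch.Tensor,
               labels: torch.Tensor,
               ukm_loss: torch.Tensor,
               lam: float = 0.1) -> torch.Tensor:
    """Combine the two terms: L_Total = L_CLS + lam * L_UKM.

    Assumptions (not stated in the paper): L_CLS is a cross-entropy loss
    over live/attack logits, and lam = 0.1 is only a placeholder because
    the paper does not report the trade-off hyper-parameter's value.
    """
    cls_loss = F.cross_entropy(logits, labels)  # L_CLS: live vs. attack classification
    return cls_loss + lam * ukm_loss            # L_UKM: Unified Knowledge Mining term
```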
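Likewise, for the Dataset Splits row above, the sketch below shows one way a Protocol-1 style split (live faces and all attack types present in the training, validation, and test sets) could be assembled. The record format, split ratios, and function name are hypothetical and are not taken from the paper or the released dataset.

```python
import random
from collections import defaultdict

def protocol1_split(records, ratios=(0.6, 0.2, 0.2), seed=0):
    """Stratified split so every subset keeps live faces and all attack types.

    `records` is assumed to be a list of (video_path, attack_type) pairs,
    where attack_type is 'live', one of the physical attacks, or one of
    the digital attacks. The ratios are illustrative only.
    """
    rng = random.Random(seed)
    by_type = defaultdict(list)
    for path, attack_type in records:
        by_type[attack_type].append((path, attack_type))

    splits = {"train": [], "val": [], "test": []}
    for items in by_type.values():
        rng.shuffle(items)
        n_train = int(ratios[0] * len(items))
        n_val = int(ratios[1] * len(items))
        splits["train"] += items[:n_train]
        splits["val"] += items[n_train:n_train + n_val]
        splits["test"] += items[n_train + n_val:]
    return splits
```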