Learning Meta Model for Zero- and Few-Shot Face Anti-Spoofing

Authors: Yunxiao Qin, Chenxu Zhao, Xiangyu Zhu, Zezheng Wang, Zitong Yu, Tianyu Fu, Feng Zhou, Jingping Shi, Zhen Lei (pp. 11916-11923)

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments show its superior performance over existing methods on the presented benchmarks and in existing zero-shot FAS protocols. 'Comprehensive experiments are conducted to show that AIM-FAS achieves state-of-the-art results on zero- and few-shot anti-spoofing benchmarks.' From Section 5.1 (Experiment Setup), Performance Metrics: AIM-FAS is evaluated by 1) Attack Presentation Classification Error Rate (APCER); 2) Bona Fide Presentation Classification Error Rate (BPCER); 3) ACER (International Organization for Standardization 2016), the mean of APCER and BPCER; and 4) Area Under Curve (AUC). Evaluation Process: on all benchmarks, the meta-learner's zero- and few-shot FAS performance is evaluated by 1) training the meta-learner on training tasks generated from the train set; 2) testing it on zero- and few-shot FAS testing tasks from the test set; 3) calculating its performance with Eq. 6.
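The quoted metrics reduce to simple counting over attack and bona-fide presentations. Below is a minimal sketch, assuming binary labels (1 = attack, 0 = bona fide), a scalar spoof score per face, and a fixed decision threshold; the function name and the sklearn-based AUC are our choices, not the paper's code.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def fas_metrics(labels, scores, threshold=0.5):
    """Compute APCER, BPCER, ACER, and AUC for one testing task (sketch)."""
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    preds = (scores >= threshold).astype(int)  # 1 = predicted attack

    attacks = labels == 1      # attack presentations
    bona_fide = labels == 0    # bona-fide presentations

    apcer = float(np.mean(preds[attacks] == 0))    # attacks accepted as bona fide
    bpcer = float(np.mean(preds[bona_fide] == 1))  # bona fide rejected as attacks
    acer = (apcer + bpcer) / 2.0                   # mean of APCER and BPCER
    auc = roc_auc_score(labels, scores)            # threshold-free ranking metric
    return apcer, bpcer, acer, auc
```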
Researcher Affiliation | Collaboration | Yunxiao Qin (1,2), Chenxu Zhao (2), Xiangyu Zhu (3), Zezheng Wang (2), Zitong Yu (4), Tianyu Fu (5), Feng Zhou (2), Jingping Shi (1), Zhen Lei (3). Affiliations: 1 Northwestern Polytechnical University, Xi'an, China; 2 AIBEE, Beijing, China; 3 National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China; 4 CMVS, University of Oulu, Oulu, Finland; 5 Winsense Technology Ltd, Beijing, China.
Pseudocode | Yes | Algorithm 1: AIM-FAS in the training stage. Algorithm 2: AIM-FAS in the testing stage.
Open Source Code | No | The paper states 'All three benchmarks will be released soon,' referring to the datasets, but it does not mention releasing the source code of the AIM-FAS method itself.
Open Datasets | Yes | To verify AIM-FAS, we propose three zero- and few-shot FAS benchmarks: OULU-ZF, Cross-ZF, and SURF-ZF. All three benchmarks will be released soon. OULU-ZF is a single-domain zero- and few-shot FAS benchmark built on OULU-NPU. Cross-ZF is a cross-domain zero- and few-shot FAS benchmark that is more challenging than OULU-ZF: it contains more varied living and spoofing categories and is built from several public FAS datasets: CASIA-MFSD, MSU-MFSD, and SiW. SURF-ZF is a cross-modal zero- and few-shot FAS benchmark built on the CASIA-SURF dataset.
Dataset Splits | Yes | To generate zero- and few-shot FAS tasks, we split the living and spoofing faces into fine-grained patterns, and show the fine-grained dataset structure in Fig. 2(a). We show an example of a K-shot FAS task in Fig. 2(b), and generate the K-shot (K >= 0) FAS tasks in the following way (a sketch of these steps follows this row): 1) sample one fine-grained living category Li and one spoofing category Sm from the train set. ... 4) sample K + Q faces from each of Lj and Sn. 5) build the query set with 2Q faces from Lj and Sn, and build the support set with the other 2(M - K) + 2K = 2M faces. ... Table 1 (zero- and few-shot FAS benchmark OULU-ZF):
Set   | Device          | Subjects | PSAI
Train | Phone 1,2,4,5,6 | 1-20     | Living 2,3; Print 1; Replay 1
Val   | Phone 3         | 21-35    | Living 1-3; Print 1,2; Replay 1,2
Test  | Phone 1,2,4,5,6 | 36-55    | Living 1; Print 2; Replay 2
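The quoted task-generation steps can be made concrete with a short sketch. The dictionary layout (fine-grained category -> list of faces) and the handling of the elided steps 2-3 are our assumptions; only steps 1, 4, and 5 are stated in the quote.

```python
import random

def sample_k_shot_task(living, spoofing, K, Q=15, M=10):
    """Hedged sketch of K-shot FAS task generation; not the paper's code."""
    # Step 1: sample one fine-grained living category Li and one spoofing
    # category Sm from the train set.
    Li = random.choice(list(living))
    Sm = random.choice(list(spoofing))
    # Steps 2-3 are elided in the quote; we assume they choose distinct
    # categories Lj and Sn for the query side of the task.
    Lj = random.choice([c for c in living if c != Li])
    Sn = random.choice([c for c in spoofing if c != Sm])
    # Step 4: sample K + Q faces from each of Lj and Sn.
    lj_faces = random.sample(living[Lj], K + Q)
    sn_faces = random.sample(spoofing[Sn], K + Q)
    # Step 5: query set = 2Q faces from Lj and Sn; support set = the other
    # 2(M - K) + 2K = 2M faces (K shots each from Lj/Sn, plus M - K faces
    # each from Li/Sm -- the latter split is our assumption).
    query = lj_faces[:Q] + sn_faces[:Q]
    support = (lj_faces[Q:] + sn_faces[Q:]
               + random.sample(living[Li], M - K)
               + random.sample(spoofing[Sm], M - K))
    return support, query
```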
Hardware Specification | No | The paper does not provide any specific hardware details, such as GPU models, CPU types, or memory amounts, used for running the experiments. It lacks concrete specifications like 'NVIDIA A100' or 'Intel Xeon'.
Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., 'Python 3.8', 'PyTorch 1.9'); a typical deep-learning stack can only be inferred.
Experiment Setup | Yes | We generate 100,000 training tasks on the train set and 100 (T = 100) testing tasks on the test set. For each K-shot training task, K is randomly sampled from {0, 1, 3, 5, 7, 9}. For testing tasks, K is a specified number indicating that the meta-learner is tested on specific K-shot tasks; for example, to evaluate the meta-learner on zero-shot FAS tasks, we set K = 0 and generate 100 such zero-shot testing tasks. We set Q to 15 and M to 10. The meta batch size is set to 8, and the meta-learning rate β is set to 0.0001. The AIU parameters α and γ are initialized to 0.001 and 1, respectively. The training process of AIM-FAS is shown in Algorithm 1 and Fig. 2(b): we first pre-train the meta-learner to learn prior knowledge about FAS on the train set (line 2 in Algorithm 1), and then meta-train it on zero- and few-shot FAS training tasks.
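For reference, the quoted hyperparameters fit in a few lines. This is a minimal sketch of the training-task schedule only, reusing the hypothetical sample_k_shot_task helper from the earlier sketch; all constant names are ours, and the pre-training and meta-training updates of Algorithm 1 (including the exact role of the AIU parameters α and γ) are not reproduced here.

```python
import random

# Quoted hyperparameters from the setup above; constant names are ours.
NUM_TRAIN_TASKS = 100_000        # training tasks generated on the train set
NUM_TEST_TASKS = 100             # T = 100 testing tasks on the test set
K_CHOICES = (0, 1, 3, 5, 7, 9)   # K is sampled per training task
Q, M = 15, 10
META_BATCH_SIZE = 8
META_LR_BETA = 1e-4              # meta-learning rate β
AIU_ALPHA_INIT = 0.001           # AIU parameter α, initial value
AIU_GAMMA_INIT = 1.0             # AIU parameter γ, initial value

def training_task_batches(living, spoofing):
    """Yield meta-batches of K-shot training tasks (sketch)."""
    for _ in range(NUM_TRAIN_TASKS // META_BATCH_SIZE):
        yield [sample_k_shot_task(living, spoofing,
                                  K=random.choice(K_CHOICES), Q=Q, M=M)
               for _ in range(META_BATCH_SIZE)]
```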