Domain-Hallucinated Updating for Multi-Domain Face Anti-spoofing

Authors: Chengyang Hu, Ke-Yue Zhang, Taiping Yao, Shice Liu, Shouhong Ding, Xin Tan, Lizhuang Ma

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experimental results and visualizations demonstrate that the proposed method outperforms state-of-the-art competitors in terms of effectiveness. We utilize the FASMD dataset based on SiW (Liu, Jourabloo, and Liu 2018), SiW-Mv2 (Liu et al. 2019) and OULU-NPU (Boulkenafet et al. 2017) to evaluate the proposed methods.
Researcher Affiliation | Collaboration | 1. Shanghai Jiao Tong University; 2. Youtu Lab, Tencent; 3. East China Normal University; 4. MoE Key Lab of Artificial Intelligence, Shanghai Jiao Tong University
Pseudocode | Yes | Algorithm 1: Training Procedure of DHU
Open Source Code | No | The paper does not contain an explicit statement about open-sourcing the code for the described methodology, nor a direct link to a code repository.
Open Datasets | Yes | We utilize the FASMD dataset based on SiW (Liu, Jourabloo, and Liu 2018), SiW-Mv2 (Liu et al. 2019) and OULU-NPU (Boulkenafet et al. 2017) to evaluate the proposed methods.
Dataset Splits | No | The paper describes training on a source dataset (A) and then on other datasets (B–E), and testing under different protocols (e.g., A→B, A→C), stating that it is 'Strictly following the setting (Guo et al. 2022)'. While it references the source of the splitting protocol, it does not explicitly provide the training/validation/test splits (e.g., percentages or counts) within this paper for direct reproduction.
Hardware Specification | Yes | We use the public PyTorch (Paszke et al. 2017) framework with a 32GB Tesla V100 on Linux OS to implement our framework.
Software Dependencies | No | The paper mentions using the 'PyTorch (Paszke et al. 2017) framework' and 'Linux OS' but does not provide specific version numbers for PyTorch or other key software libraries, so the software environment cannot be reproduced exactly.
Experiment Setup | Yes | The input is the detected face region normalized to size 256×256 with RGB channels. The extractor is ResNet18 (He et al. 2016) with 4 layers. The buffer size N_B is 200 with N_B0 : N_B1 = 1 : 1, and the dimension of the domain information is d_s = 512, taken from the first 2 layers. The batch size is 16, with a 1 : 1 ratio of real to fake images. We set l = 2, κ = 1/0.7. The learning rate is 1e-2 and each dataset is trained for 50,000 steps.