Self-Domain Adaptation for Face Anti-Spoofing

Authors: Jingjing Wang, Jingyi Zhang, Ying Bian, Youyi Cai, Chunmao Wang, Shiliang Pu

AAAI 2021

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Extensive experiments on four public datasets validate the effectiveness of the proposed method. |
| Researcher Affiliation | Industry | Jingjing Wang, Jingyi Zhang, Ying Bian, Youyi Cai, Chunmao Wang, Shiliang Pu; Hikvision Research Institute. {wangjingjing9, zhangjingyi, bianying, caiyouyi, wangchunmao, pushiliang.hri}@hikvision.com |
| Pseudocode | Yes | Algorithm 1: Adaptor Learning by Meta-Learning (a hedged sketch of such a loop appears below the table). |
| Open Source Code | No | The paper does not provide any explicit statement or link for open-source code availability. |
| Open Datasets | Yes | Four public face anti-spoofing datasets are used to evaluate the method: OULU-NPU (Boulkenafet et al. 2017, denoted O), CASIA-MFSD (Zhang et al. 2012, denoted C), Idiap Replay-Attack (Ivana Chingovska and Marcel 2012, denoted I), and MSU-MFSD (Wen, Han, and Jain 2015, denoted M). |
| Dataset Splits | No | The paper mentions selecting a meta-test domain D_val for meta-learning, but it does not specify an overall train/validation/test split or the sample counts of a validation set used for hyperparameter tuning. |
| Hardware Specification | No | The paper gives no details about the hardware used for the experiments (e.g., GPU/CPU models, memory). |
| Software Dependencies | No | The paper states that "the Adam optimizer (Kingma and Ba 2014) is used for the optimization" but does not list software versions needed for reproducibility (e.g., Python, PyTorch/TensorFlow). |
| Experiment Setup | Yes | The Adam optimizer (Kingma and Ba 2014) is used. The learning rates α and β are set to 1e-3; µ and λ are set to 10 and 0.1, respectively. During training, the batch size is 20 per domain (40 in total across the two training domains). At inference, for efficiency the adaptor is optimized for only one epoch with a batch size of 20 (see the test-time sketch below). |