Regularized Fine-Grained Meta Face Anti-Spoofing

Authors: Rui Shao, Xiangyuan Lan, Pong C. Yuen

AAAI 2020, pp. 11974-11981

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on four public datasets validate the effectiveness of the proposed method.
Researcher Affiliation | Academia | Rui Shao, Xiangyuan Lan, Pong C. Yuen; Department of Computer Science, Hong Kong Baptist University, Hong Kong; {ruishao, pcyuen}@comp.hkbu.edu.hk, xiangyuanlan@life.hkbu.edu.hk
Pseudocode | Yes | Algorithm 1 Regularized Fine-grained Meta Face Anti-spoofing (see the meta-step sketch below the table)
Open Source Code | Yes | Supplementary material and codes are available at https://github.com/rshaojimmy/AAAI2020-RFMetaFAS
Open Datasets | Yes | The evaluation of our method is conducted on four public face anti-spoofing datasets that contain both print and video replay attacks: Oulu-NPU (Boulkenafet et al. 2017) (O for short), CASIA-MFSD (Zhang et al. 2012) (C for short), Idiap Replay-Attack (Chingovska, Anjos, and Marcel 2012) (I for short), and MSU-MFSD (Wen, Han, and Jain 2015) (M for short).
Dataset Splits | Yes | To this end, at each training iteration, we divide the original N source domains by randomly selecting N − 1 domains as meta-train domains (denoted as Dtrn) and the remaining one as the meta-test domain (denoted as Dval). (See the domain-split sketch below the table.)
Hardware Specification | No | The paper states 'Our deep network is implemented on the platform of PyTorch,' but does not provide specific details about the hardware used for the experiments, such as CPU or GPU models or memory specifications.
Software Dependencies | No | The paper mentions 'Our deep network is implemented on the platform of PyTorch' and 'The Adam optimizer (Kingma and Ba 2014) is used for the optimization,' but does not specify version numbers for PyTorch or any other software dependency.
Experiment Setup | Yes | The Adam optimizer (Kingma and Ba 2014) is used for the optimization. The learning rates α and β are set to 1e-3. The batch size is 20 per domain, i.e. 60 in total for the 3 training domains. (See the setup sketch below the table.)
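
The per-iteration split quoted in the Dataset Splits row can be illustrated with a minimal sketch. The snippet below uses the single-letter domain shorthand from the paper (O, C, I, M); it only illustrates the random meta-train/meta-test partition and is not code from the authors' repository.

```python
import random

def split_domains(source_domains):
    """Randomly hold out one source domain as the meta-test domain (Dval)
    and keep the remaining N - 1 domains as meta-train domains (Dtrn)."""
    d_val = random.choice(source_domains)
    d_trn = [d for d in source_domains if d != d_val]
    return d_trn, d_val

# Example: training on O&C&I with M kept as the unseen target domain.
d_trn, d_val = split_domains(["O", "C", "I"])
print(d_trn, d_val)  # e.g. ['O', 'I'] 'C'
```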
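The hyper-parameters quoted in the Experiment Setup row translate into a short setup sketch. Only the reported values (Adam, α = β = 1e-3, batch size 20 per domain) come from the paper; the two-layer backbone and the random toy datasets below are stand-ins added here so the snippet runs on its own.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in backbone; the paper's feature extractor, meta-learners and
# depth estimator are not reproduced here.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 256 * 256, 2))

# Values reported in the paper: Adam, learning rates alpha = beta = 1e-3,
# batch size 20 per source domain (60 in total for 3 training domains).
ALPHA = 1e-3            # inner-loop (meta-train) learning rate
BETA = 1e-3             # outer-loop (meta-update) learning rate
BATCH_PER_DOMAIN = 20

meta_optimizer = torch.optim.Adam(model.parameters(), lr=BETA)

# Toy stand-in data: three source domains of random 256x256 RGB crops with
# binary live/spoof labels (placeholders for the real O, C, I datasets).
domain_datasets = {
    name: TensorDataset(torch.randn(100, 3, 256, 256), torch.randint(0, 2, (100,)))
    for name in ["O", "C", "I"]
}
loaders = {
    name: DataLoader(ds, batch_size=BATCH_PER_DOMAIN, shuffle=True, drop_last=True)
    for name, ds in domain_datasets.items()
}
```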
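The Pseudocode row points to Algorithm 1. The meta-step sketch below shows the general shape of one MAML-style meta-iteration over the meta-train/meta-test split, assuming PyTorch 2.x (`torch.func.functional_call`) and a plain cross-entropy loss. It is a generic domain-generalization meta-learning step, not the authors' Algorithm 1: the paper's fine-grained per-domain meta-learners and depth-map regularization are omitted.

```python
import torch
import torch.nn.functional as F
from torch.func import functional_call

def meta_step(model, meta_optimizer, meta_train_batches, meta_test_batch, alpha=1e-3):
    """One MAML-style meta-iteration over the current domain split.

    meta_train_batches: list of (images, labels) batches, one per meta-train domain.
    meta_test_batch:    (images, labels) batch from the held-out meta-test domain.
    alpha:              inner-loop learning rate (1e-3 in the paper).
    """
    x_val, y_val = meta_test_batch
    names = [n for n, _ in model.named_parameters()]
    params = [p for _, p in model.named_parameters()]

    meta_optimizer.zero_grad()
    total_loss = 0.0

    for x_trn, y_trn in meta_train_batches:
        # Meta-train: classification loss on one source domain and a
        # differentiable one-step parameter update.
        loss_trn = F.cross_entropy(model(x_trn), y_trn)
        grads = torch.autograd.grad(loss_trn, params, create_graph=True)
        adapted = {n: p - alpha * g for n, p, g in zip(names, params, grads)}

        # Meta-test: evaluate the adapted parameters on the held-out domain.
        loss_val = F.cross_entropy(functional_call(model, adapted, (x_val,)), y_val)
        total_loss = total_loss + loss_trn + loss_val

    # Outer update; the optimizer's learning rate plays the role of beta.
    total_loss.backward()
    meta_optimizer.step()
    return float(total_loss)
```

A training loop would combine the three sketches: call split_domains on the source domains, draw one batch per domain from the loaders, and pass the meta-train batches and the held-out batch to meta_step with ALPHA as the inner learning rate.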