Cyclically Disentangled Feature Translation for Face Anti-spoofing

Authors: Haixiao Yue, Keyao Wang, Guosheng Zhang, Haocheng Feng, Junyu Han, Errui Ding, Jingdong Wang

AAAI 2023

Reproducibility assessment: each variable below is listed with its assessed result and the supporting LLM response.
Research Type: Experimental
LLM response: "Extensive experiments on several public datasets demonstrate that our proposed approach significantly outperforms the state of the art. Code and models are available at https://github.com/vis-face/CDFTN. ... We provide our evaluations on four publicly available databases for cross-domain FAS: CASIA-MFSD (Zhang et al. 2012) (C for short), Replay-Attack (Chingovska, Anjos, and Marcel 2012) (I for short), MSU-MFSD (Wen, Han, and Jain 2015) (M for short) and Oulu-NPU (Boulkenafet et al. 2017) (O for short)."
Researcher Affiliation: Industry
LLM response: "Haixiao Yue*, Keyao Wang*, Guosheng Zhang*, Haocheng Feng, Junyu Han, Errui Ding, Jingdong Wang, Department of Computer Vision Technology (VIS), Baidu Inc. {yuehaixiao, wangkeyao, zhangguosheng, fenghaocheng, hanjunyu, dingerrui}@baidu.com, wangjingdong@outlook.com"
Pseudocode: Yes
LLM response: "Algorithm 1: Training Procedure of CDFTN"
Open Source Code: Yes
LLM response: "Code and models are available at https://github.com/vis-face/CDFTN."
Open Datasets: Yes
LLM response: "We provide our evaluations on four publicly available databases for cross-domain FAS: CASIA-MFSD (Zhang et al. 2012) (C for short), Replay-Attack (Chingovska, Anjos, and Marcel 2012) (I for short), MSU-MFSD (Wen, Han, and Jain 2015) (M for short) and Oulu-NPU (Boulkenafet et al. 2017) (O for short)."
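Since the paper refers to these databases by single-letter shorthand, a small mapping makes the cross-domain notation concrete. This is a minimal sketch: the mapping itself comes from the quote above, but the enumeration of source-to-target transfer tasks is only one plausible reading, not the paper's confirmed evaluation protocol.

```python
# Shorthand-to-dataset mapping taken from the quoted passage. Enumerating
# every single-source -> single-target task is an illustrative assumption,
# not the paper's confirmed evaluation protocol.
DATASETS = {
    "C": "CASIA-MFSD",
    "I": "Replay-Attack",
    "M": "MSU-MFSD",
    "O": "Oulu-NPU",
}

tasks = [(s, t) for s in DATASETS for t in DATASETS if s != t]
for s, t in tasks:
    print(f"{s} -> {t}: train on {DATASETS[s]}, adapt to {DATASETS[t]}")
```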
Dataset Splits: No
LLM response: The paper mentions utilizing 'the whole source domain dataset and the training set of target domain' for training, but it does not explicitly describe a validation split or how one would be used for hyperparameter tuning or early stopping.
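For illustration, here is a minimal sketch of the training composition the quote describes, assuming PyTorch; the dataset objects are hypothetical placeholders, and note that no validation partition appears anywhere in this setup, which is exactly the gap the assessment flags.

```python
# A minimal sketch of the quoted training pool: the whole source domain plus
# only the target domain's training split. The dataset objects passed in are
# hypothetical placeholders; no validation split is described in the paper.
from torch.utils.data import ConcatDataset, DataLoader

def build_stage1_loader(source_full, target_train, batch_size=2):
    # Batch size 2 matches the Stage 1 setting quoted under Experiment Setup.
    pool = ConcatDataset([source_full, target_train])
    return DataLoader(pool, batch_size=batch_size, shuffle=True)
```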
Hardware Specification: No
LLM response: The paper describes software and training parameters (e.g., Adam optimizer, learning rate, batch size, epochs) but does not specify any hardware details, such as GPU models, CPU types, or memory, used to run the experiments.
Software Dependencies: No
LLM response: The paper mentions specific optimizers (Adam), models such as ResNet-18 and LGSC, and the Dlib toolbox for face detection, but it provides no version numbers for any of these software components or libraries, which reproducible dependencies would require.
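One inexpensive way to close this gap when re-running the released code is to record the versions actually in use. A minimal sketch, assuming a Python environment where torch and dlib are installed (both expose a __version__ attribute):

```python
# Capture the interpreter and library versions the paper leaves unspecified.
import sys
import torch
import dlib

print("python:", sys.version.split()[0])
print("torch :", torch.__version__)
print("dlib  :", dlib.__version__)
```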
Experiment Setup: Yes
LLM response: "In Stage 1, the learning rate is set as 1×10⁻³ and betas of optimizer are set to (0.5, 0.999); we choose values of λ1, λ2, λ3, λ4 as 1, 1, 10, 10, respectively; the batch size is set to be 2 and the training process lasts for 30 epochs. In Stage 2, we set α1, α2, α3 to be the same as (Feng et al. 2020). During training, the batch size is set to be 32 and the classifier would be trained for 5 epochs."
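The quoted hyperparameters map directly onto a two-stage loop of the shape that Algorithm 1's title suggests. The following is a hypothetical skeleton, assuming PyTorch: only the optimizer settings, the loss weights λ1..λ4, the batch-size comments, and the epoch counts come from the quote; every module, loss function, and loader name is a placeholder, and the Stage 2 learning rate is not reported, so the value used here is arbitrary.

```python
# A hypothetical skeleton of the two-stage training procedure the paper names
# "Algorithm 1: Training Procedure of CDFTN". Only the optimizer settings,
# loss weights, and schedule lengths come from the quoted Experiment Setup;
# every module, loss, and loader is a placeholder.
import torch

def train_cdftn(translator, classifier, source_loader, target_loader,
                stage2_loader, stage1_losses):
    # Stage 1: train the translation network for 30 epochs with Adam,
    # lr = 1e-3, betas = (0.5, 0.999); the loaders use batch size 2.
    opt1 = torch.optim.Adam(translator.parameters(), lr=1e-3, betas=(0.5, 0.999))
    lambdas = (1.0, 1.0, 10.0, 10.0)  # λ1, λ2, λ3, λ4 as quoted
    for _ in range(30):
        # Pairing source and target batches via zip is an assumption.
        for (src, _), (tgt, _) in zip(source_loader, target_loader):
            outputs = translator(src, tgt)
            loss = sum(w * fn(outputs) for w, fn in zip(lambdas, stage1_losses))
            opt1.zero_grad()
            loss.backward()
            opt1.step()

    # Stage 2: train the classifier for 5 epochs; the loader uses batch size 32.
    # The Stage 2 learning rate is not quoted, so 1e-3 here is a placeholder.
    opt2 = torch.optim.Adam(classifier.parameters(), lr=1e-3)
    criterion = torch.nn.CrossEntropyLoss()
    for _ in range(5):
        for imgs, labels in stage2_loader:
            loss = criterion(classifier(imgs), labels)
            opt2.zero_grad()
            loss.backward()
            opt2.step()
    return translator, classifier
```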