Dual-Cross Central Difference Network for Face Anti-Spoofing
Authors: Zitong Yu, Yunxiao Qin, Hengshuang Zhao, Xiaobai Li, Guoying Zhao
IJCAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Comprehensive experiments are performed on four benchmark datasets with three testing protocols to demonstrate our state-of-the-art performance. |
| Researcher Affiliation | Academia | Zitong Yu¹, Yunxiao Qin², Hengshuang Zhao³, Xiaobai Li¹ and Guoying Zhao¹ (¹CMVS, University of Oulu; ²Northwestern Polytechnical University; ³University of Oxford) |
| Pseudocode | Yes | Algorithm 1 Patch Exchange Augmentation. Input: face images I with batch size N, pseudo depth map labels D, augmented ratio γ ∈ [0, 1], step number ρ. 1: for each I_i and D_i, i = 1, ..., γN do; 2: for each step ρ do; 3: randomly select a patch region P within I_i; 4: randomly select a batch index j, j ≤ N; 5: exchange the image patch I_i(P) = I_j(P) and label patch D_i(P) = D_j(P); 6: end; 7: end; 8: return augmented I and D. (A PyTorch sketch of this algorithm is given after the table.) |
| Open Source Code | No | The paper states 'Our proposed method is implemented with Pytorch.' but does not provide any explicit statement about releasing source code or a link to a repository. |
| Open Datasets | Yes | Four databases OULU-NPU [Boulkenafet et al., 2017], CASIA-MFSD [Zhang et al., 2012], Replay-Attack [Chingovska et al., 2012] and SiW-M [Liu et al., 2019] are used in our experiments. |
| Dataset Splits | Yes | In OULU-NPU dataset, we follow the original protocols and metrics, i.e., Attack Presentation Classification Error Rate (APCER), Bona Fide Presentation Classification Error Rate (BPCER), and ACER for a fair comparison. |
| Hardware Specification | Yes | In the training stage, models are trained with batch size 8 and Adam optimizer on a single V100 GPU. |
| Software Dependencies | No | The paper states 'Our proposed method is implemented with Pytorch.' but does not provide specific version numbers for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | In the training stage, models are trained with batch size 8 and Adam optimizer on a single V100 GPU. Data augmentations including horizontal flip, color jitter and Cutout are used as baseline. The initial learning rate (lr) and weight decay are 1e-4 and 5e-5, respectively. We train models with maximum 800 epochs while lr halves in the 500th epoch. |
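The Patch Exchange Augmentation pseudocode quoted above maps directly onto a batch-level tensor operation. Below is a minimal PyTorch sketch of Algorithm 1; the function name `patch_exchange`, the `max_patch` bound on patch size, and the rescaling of the patch region onto the depth-map grid are assumptions not specified in the quoted pseudocode.

```python
import torch

def patch_exchange(images, depth_maps, gamma=0.5, steps=2, max_patch=0.3):
    """Sketch of Algorithm 1 (Patch Exchange Augmentation).

    images:     (N, C, H, W) batch of face images
    depth_maps: (N, 1, h, w) pseudo depth map labels
    gamma:      augmented ratio, i.e. fraction of the batch to augment
    steps:      number of patch exchanges per augmented sample (rho)
    max_patch:  upper bound on patch side as a fraction of image size
                (assumed here; not fixed in the quoted pseudocode)
    """
    N, _, H, W = images.shape
    _, _, h, w = depth_maps.shape
    num_aug = int(gamma * N)

    for i in range(num_aug):
        for _ in range(steps):
            # Randomly select a patch region P within image i.
            ph = torch.randint(1, max(2, int(max_patch * H)), (1,)).item()
            pw = torch.randint(1, max(2, int(max_patch * W)), (1,)).item()
            y = torch.randint(0, H - ph + 1, (1,)).item()
            x = torch.randint(0, W - pw + 1, (1,)).item()

            # Randomly select another batch index j.
            j = torch.randint(0, N, (1,)).item()

            # Copy the image patch and the corresponding depth-label patch
            # from sample j into sample i, as written in Algorithm 1.
            images[i, :, y:y + ph, x:x + pw] = images[j, :, y:y + ph, x:x + pw]

            # Map the patch coordinates onto the (possibly smaller) depth map.
            dy, dx = int(y * h / H), int(x * w / W)
            dph, dpw = max(1, int(ph * h / H)), max(1, int(pw * w / W))
            depth_maps[i, :, dy:dy + dph, dx:dx + dpw] = \
                depth_maps[j, :, dy:dy + dph, dx:dx + dpw]

    return images, depth_maps
```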
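The quoted experiment setup (Adam, lr 1e-4, weight decay 5e-5, batch size 8, 800 epochs with the lr halved at epoch 500) can be expressed with a standard PyTorch optimizer and scheduler. This is a minimal sketch under those stated hyperparameters; the placeholder model and the use of `MultiStepLR` are assumptions, since the paper does not describe how the schedule is implemented.

```python
import torch

# Placeholder model; the paper's DC-CDN architecture is not reproduced here.
model = torch.nn.Conv2d(3, 1, kernel_size=3, padding=1)

# Hyperparameters as stated in the paper's experiment setup.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=5e-5)
# Halve the learning rate at epoch 500 of the 800 training epochs.
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[500], gamma=0.5)

for epoch in range(800):
    # ... training loop over batches of size 8 would go here ...
    scheduler.step()
```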