Dual Reweighting Domain Generalization for Face Presentation Attack Detection
Authors: Shubao Liu, Ke-Yue Zhang, Taiping Yao, Kekai Sheng, Shouhong Ding, Ying Tai, Jilin Li, Yuan Xie, Lizhuang Ma
IJCAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments and visualizations are presented to demonstrate the effectiveness and interpretability of our method against the state-of-the-art competitors. |
| Researcher Affiliation | Collaboration | 1East China Normal University, China 2Youtu Lab, Tencent, Shanghai, China |
| Pseudocode | Yes | Algorithm 1: The optimization strategy of our DRDG |
| Open Source Code | No | The paper does not contain an explicit statement about releasing the source code for the described methodology, nor does it provide a link to a code repository. |
| Open Datasets | Yes | Utilizing four public databases, to evaluate our method: OULU-NPU (denoted as O) [Boulkenafet et al., 2017], CASIA-FASD (denoted as C) [Zhang et al., 2012], MSU-MFSD (denoted as M) [Wen et al., 2015] and Idiap Replay-Attack (denoted as I) [Chingovska et al., 2012]. |
| Dataset Splits | No | Concretely, we select one dataset for testing and the remaining three for training, then four testing tasks are obtained: O&C&M to I, O&C&I to M, O&M&I to C and I&C&M to O. The paper describes how datasets are split into training and testing sets but does not explicitly provide details on a separate validation split. |
| Hardware Specification | Yes | Our method is implemented via PyTorch on 11G NVIDIA 2080Ti GPUs with Linux OS |
| Software Dependencies | No | The paper mentions 'PyTorch' and 'Linux OS' but does not provide specific version numbers for these software components. It also mentions the 'Adam optimizer', but this is an algorithm, not a versioned software dependency. |
| Experiment Setup | Yes | The learning rates α, β are set as 1e-3, 1e-4, respectively. We extract RGB and HSV channels of images, thus the input size is 256 × 256 × 6. In the training phase, the balance coefficients λ1 and λ2 are set to 10 and 0.1 respectively. In our method, the K in Algorithm 1 is an important hyperparameter, which determines the training pace of large domain-biased samples, and we set it as 5 according to our experiments. |
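The Experiment Setup row states that RGB and HSV channels are stacked into a 256 × 256 × 6 input. The paper does not provide reference code, so the following is a minimal sketch of that preprocessing step under stated assumptions: the function name `make_six_channel_input` is hypothetical, the input is assumed to be a float RGB array in [0, 1] already resized to the target resolution, and the HSV conversion uses the Python standard library's `colorsys` rather than whatever library the authors used.

```python
import colorsys
import numpy as np

def make_six_channel_input(rgb):
    """Stack RGB and HSV channels into a (H, W, 6) array.

    rgb : float array of shape (H, W, 3) with values in [0, 1],
          assumed to be already resized (e.g. to 256 x 256).
    """
    h, w, _ = rgb.shape
    hsv = np.empty_like(rgb)
    # Per-pixel RGB -> HSV conversion via the stdlib; a vectorized
    # implementation would be preferable for full-size images.
    for i in range(h):
        for j in range(w):
            hsv[i, j] = colorsys.rgb_to_hsv(*rgb[i, j])
    # Concatenate along the channel axis: channels 0-2 are RGB, 3-5 are HSV.
    return np.concatenate([rgb, hsv], axis=-1)

# Small usage example with a 4x4 image (pure red pixels).
img = np.zeros((4, 4, 3))
img[..., 0] = 1.0  # red channel
six = make_six_channel_input(img)
```

For a real 256 × 256 image this yields the 256 × 256 × 6 input described in the paper; the loop is written for clarity, not speed.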