Feature Generation and Hypothesis Verification for Reliable Face Anti-spoofing
Authors: Shice Liu, Shitao Lu, Hongyi Xu, Jing Yang, Shouhong Ding, Lizhuang Ma
AAAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show our framework achieves promising results and outperforms the state-of-the-art approaches on extensive public datasets. |
| Researcher Affiliation | Collaboration | Shice Liu¹\*, Shitao Lu¹,²\*, Hongyi Xu¹,³, Jing Yang¹, Shouhong Ding¹, Lizhuang Ma²,³ (¹Youtu Lab, Tencent, Shanghai, China; ²East China Normal University, Shanghai, China; ³Shanghai Jiao Tong University, Shanghai, China) |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our source code is available at https://github.com/lustoo/FGHV. |
| Open Datasets | Yes | Above all, we conduct the cross-dataset testing on four public datasets, i.e., OULU-NPU (denoted as O) (Boulkenafet et al. 2017), CASIA-MFSD (denoted as C) (Zhang et al. 2012), Idiap Replay-Attack (denoted as I) (Chingovska et al. 2012) and MSU-MFSD (denoted as M) (Wen et al. 2015). After that, the cross-type testing is carried out on the rich-type dataset, i.e., SiW-M (Liu et al. 2019). |
| Dataset Splits | No | The paper states 'select one dataset for testing and the other three datasets for training' for cross-dataset testing and 'select out one attack type as the unknown testing type and treat the others as the known training types' for cross-type testing, but it does not explicitly describe a separate validation split or how one would be derived. (A sketch of the leave-one-dataset-out rotation appears below the table.) |
| Hardware Specification | Yes | All experiments are conducted via PyTorch on a 32GB Tesla-V100 GPU. |
| Software Dependencies | No | The paper mentions using 'PyTorch' but does not specify its version number or the versions of other software dependencies. |
| Experiment Setup | Yes | During the training period, the framework is trained with SGD optimizer where the momentum is 0.9 and the weight decay is 5e-4. The learning rate is initially 1e-3 and drops to 1e-4 after 50 epochs. The hyper-parameters λ₁ and λ₂ are both set to 1. (A hedged PyTorch sketch of this schedule appears after the table.) |
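
The cross-dataset protocol quoted in the Dataset Splits row is a leave-one-dataset-out rotation over the four source datasets. The sketch below illustrates only that rotation; the dataset names follow the paper's O/C/I/M shorthand, while loading, training, and evaluation are left abstract, since the paper does not describe them at this level (nor a validation split).

```python
# Leave-one-dataset-out rotation for the O/C/I/M cross-dataset benchmark.
# Dataset names come from the paper; the loop body is a placeholder.
DATASETS = {
    "O": "OULU-NPU",
    "C": "CASIA-MFSD",
    "I": "Idiap Replay-Attack",
    "M": "MSU-MFSD",
}

for held_out, test_name in DATASETS.items():
    train_names = [name for key, name in DATASETS.items() if key != held_out]
    protocol = "&".join(k for k in DATASETS if k != held_out) + f" to {held_out}"
    print(f"{protocol}: train on {train_names}, test on {test_name}")
```

Running this enumerates the four cross-dataset protocols (e.g., O&C&I to M), with each dataset held out once for testing.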
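The Experiment Setup row pins down the optimizer and learning-rate schedule but not the architecture or loss. The following is a minimal PyTorch sketch of just that schedule, assuming a placeholder model and dummy data; the backbone, the loss, and the total epoch count are hypothetical and not taken from the paper.

```python
import torch
from torch.optim import SGD
from torch.optim.lr_scheduler import MultiStepLR

# Placeholder network standing in for the FGHV framework (hypothetical).
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, kernel_size=3, padding=1),
    torch.nn.AdaptiveAvgPool2d(1),
    torch.nn.Flatten(),
    torch.nn.Linear(16, 2),  # live vs. spoof logits
)

# Reported settings: SGD, momentum 0.9, weight decay 5e-4,
# learning rate 1e-3 dropping to 1e-4 after 50 epochs.
optimizer = SGD(model.parameters(), lr=1e-3, momentum=0.9, weight_decay=5e-4)
scheduler = MultiStepLR(optimizer, milestones=[50], gamma=0.1)

for epoch in range(60):  # total epoch count is an assumption
    # Dummy batch in place of a real face anti-spoofing loader.
    x = torch.randn(4, 3, 32, 32)
    y = torch.randint(0, 2, (4,))
    loss = torch.nn.functional.cross_entropy(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()  # steps the lr from 1e-3 to 1e-4 at epoch 50
```

The λ₁ = λ₂ = 1 weights refer to terms in the paper's loss function, which is not reproduced here.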