Generalizable Representation Learning for Mixture Domain Face Anti-Spoofing
Authors: Zhihong Chen, Taiping Yao, Kekai Sheng, Shouhong Ding, Ying Tai, Jilin Li, Feiyue Huang, Xinyu Jin
AAAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments show that the proposed method outperforms conventional DG-based face anti-spoofing methods, including those utilizing domain labels. |
| Researcher Affiliation | Collaboration | College of Information Science & Electronic Engineering, Zhejiang University; Youtu Lab, Tencent |
| Pseudocode | Yes | Algorithm 1 The optimization strategy of our D2AM |
| Open Source Code | No | The paper does not provide explicit links to open-source code or state that code will be released. |
| Open Datasets | Yes | Four public face anti-spoofing datasets are utilized to evaluate the effectiveness of our method: OULU-NPU (Boulkenafet et al. 2017) (denoted as O), CASIA-FASD (Zhang et al. 2012) (denoted as C), Idiap Replay-Attack (Chingovska, Anjos, and Marcel 2012) (denoted as I), and MSU-MFSD (Wen, Han, and Jain 2015) (denoted as M). |
| Dataset Splits | No | The paper mentions 'meta-train' and 'meta-test' domains but does not explicitly describe a separate validation split or its specific parameters (percentages, counts, or methodology) for hyperparameter tuning in the main text. |
| Hardware Specification | No | The paper does not specify the hardware used for experiments, such as GPU models or CPU specifications. |
| Software Dependencies | No | Our method is implemented via PyTorch and trained with the Adam optimizer. No PyTorch version number is specified. |
| Experiment Setup | Yes | The learning rates α, β are set as 1e-3, 1e-4, respectively, and the prior distribution for MMD is defined as the standard normal distribution. For other hyperparameters, we set λp as 0.1 and λm as 0.05. In our method, K determines the number of subdomains into which the mixture training data is divided. We found that converting the convolutional features extracted by a pre-trained ResNet into domain features for clustering clearly divides the samples into several clusters, so we can determine the value of K as 3. |
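
The Experiment Setup row above ends with the choice of K = 3 by clustering features from a pre-trained ResNet. The following is a minimal sketch of that idea under stated assumptions: ResNet-18 from torchvision, global-average-pooled convolutional activations as a stand-in for the paper's domain features, scikit-learn's KMeans, and a silhouette-score criterion for picking K are all illustrative choices, not the authors' exact pipeline.

```python
# Sketch: estimating the number of pseudo sub-domains K by clustering
# features from a pre-trained ResNet, as described in the Experiment Setup row.
# ResNet-18, pooled conv features, KMeans, and the silhouette criterion are
# assumptions for illustration only.
import torch
import torchvision.models as models
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

resnet = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
feature_extractor = torch.nn.Sequential(*list(resnet.children())[:-1])  # drop the FC head
feature_extractor.eval()

@torch.no_grad()
def extract_domain_features(images):
    """images: (N, 3, 224, 224) float tensor; returns (N, 512) pooled conv features."""
    feats = feature_extractor(images)      # (N, 512, 1, 1)
    return feats.flatten(1).cpu().numpy()  # (N, 512)

def choose_k(features, candidates=(2, 3, 4, 5)):
    """Pick K by silhouette score over a few candidates; the paper reports K = 3."""
    scores = {}
    for k in candidates:
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)
        scores[k] = silhouette_score(features, labels)
    return max(scores, key=scores.get), scores
```

The cluster assignments produced this way would serve as pseudo-domain labels for the mixture of source datasets, which is the role K plays in the paper's setup.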
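The same row also quotes the optimization hyperparameters. Below is a minimal sketch of that configuration, assuming α = 1e-3 and β = 1e-4 are the two learning rates mentioned and that λp and λm weight two auxiliary loss terms alongside the main classification loss (the quote ties the MMD term to a standard normal prior). The stand-in model and the loss names are hypothetical placeholders, not identifiers from the authors' code.

```python
# Sketch of the optimization setup quoted in the Experiment Setup row.
# ALPHA/BETA and the lambda weights come from the paper; everything else
# (the stand-in model, loss names) is a placeholder for illustration.
import torch

ALPHA, BETA = 1e-3, 1e-4        # learning rates alpha and beta from the paper;
                                # the extract does not specify beta's exact role
                                # (e.g., a meta-update step size)
LAMBDA_P, LAMBDA_M = 0.1, 0.05  # auxiliary loss weights lambda_p and lambda_m

model = torch.nn.Linear(512, 2)  # stand-in for the actual D2AM network
optimizer = torch.optim.Adam(model.parameters(), lr=ALPHA)

def total_loss(cls_loss, aux_loss_p, mmd_loss):
    # Main classification loss plus two weighted auxiliary terms; per the quote,
    # the MMD term matches features to a standard normal prior distribution.
    return cls_loss + LAMBDA_P * aux_loss_p + LAMBDA_M * mmd_loss
```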