Multi-Domain Incremental Learning for Face Presentation Attack Detection
Authors: Keyao Wang, Guosheng Zhang, Haixiao Yue, Ajian Liu, Gang Zhang, Haocheng Feng, Junyu Han, Errui Ding, Jingdong Wang
AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments show that our proposed method achieves state-of-the-art performance compared to prior methods of incremental learning. Excitingly, under more stringent setting conditions, our method approximates or even outperforms DA/DG-based methods. |
| Researcher Affiliation | Collaboration | Keyao Wang*¹, Guosheng Zhang*¹, Haixiao Yue*¹, Ajian Liu², Gang Zhang¹, Haocheng Feng¹, Junyu Han¹, Errui Ding¹, Jingdong Wang¹ — ¹Department of Computer Vision Technology (VIS), Baidu Inc.; ²CBSR&MAIS, Institute of Automation, Chinese Academy of Sciences (CASIA) |
| Pseudocode | Yes | Algorithm 1: The Procedure of MDIL-PAD. |
| Open Source Code | No | The paper does not provide any explicit statement about releasing source code or a link to a code repository. |
| Open Datasets | Yes | We evaluate the effectiveness of our method on five PAD datasets: OULU-NPU (Boulkenafet et al. 2017) (O for short), CASIA-MFSD (Zhang et al. 2012) (C for short), Idiap Replay Attack (Chingovska, Anjos, and Marcel 2012) (I for short), MSU-MFSD (Wen, Han, and Jain 2015) (M for short), and SiW (Liu, Jourabloo, and Liu 2018) (S for short). |
| Dataset Splits | No | The paper describes training and testing on different datasets in an incremental manner, but it does not specify explicit train/validation/test splits with percentages, sample counts, or cross-validation details needed to reproduce the data partitioning. |
| Hardware Specification | No | The paper describes implementation details but does not provide specific hardware information such as CPU or GPU models, or memory specifications used for running the experiments. |
| Software Dependencies | No | The paper mentions using a 'Stochastic Gradient Descent (SGD) optimizer' and a 'ViT-B/16' network, but it does not specify version numbers for any software dependencies like programming languages, libraries, or frameworks (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | We train our method using the Stochastic Gradient Descent (SGD) optimizer with a momentum of 0.9, an initial learning rate of 0.01, and a batch size of 48. Input images are resized to 224 × 224. (A minimal configuration sketch follows this table.) |
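
Based only on the hyperparameters reported above, a minimal PyTorch sketch of the training configuration might look like the following. The ViT-B/16 backbone via `timm`, the two-class live/spoof head, and the cross-entropy loss are assumptions; the MDIL-PAD-specific components (Algorithm 1, the incremental-learning procedure) are not reproduced here because the paper releases no code.

```python
# Sketch of the reported training setup: ViT-B/16 backbone, SGD with momentum 0.9,
# initial learning rate 0.01, batch size 48, 224x224 inputs.
# Assumptions: timm provides the backbone; the binary head and loss are placeholders.
import torch
import timm

# ViT-B/16 as named in the paper; the 2-class (live/spoof) head is an assumption.
model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=2)

# Reported optimizer settings.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

# Standard classification loss; the paper's actual loss terms may differ.
criterion = torch.nn.CrossEntropyLoss()

# One dummy step at the reported batch size and input resolution.
images = torch.randn(48, 3, 224, 224)
labels = torch.randint(0, 2, (48,))

model.train()
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```

In a real reproduction attempt, the dummy tensors would be replaced by loaders for the OULU-NPU, CASIA-MFSD, Replay Attack, MSU-MFSD, and SiW datasets presented in the incremental order the paper describes.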