Domain General Face Forgery Detection by Learning to Weight
Authors: Ke Sun, Hong Liu, Qixiang Ye, Yue Gao, Jianzhuang Liu, Ling Shao, Rongrong Ji (pp. 2638-2646)
AAAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on several commonly used deepfake datasets to demonstrate the effectiveness of our method in detecting synthetic faces. |
| Researcher Affiliation | Collaboration | Ke Sun1, Hong Liu2, Qixiang Ye3, Yue Gao4, Jianzhuang Liu5, Ling Shao6, Rongrong Ji1,7. 1Media Analytics and Computing Lab, Department of Artificial Intelligence, School of Informatics, Xiamen University, 361005, China; 2National Institute of Informatics, Japan; 3University of Chinese Academy of Sciences, China; 4Tsinghua University, China; 5Noah's Ark Lab, Huawei Technologies, China; 6Inception Institute of Artificial Intelligence, Abu Dhabi, UAE; 7Institute of Artificial Intelligence, Xiamen University |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks with explicit labels like 'Algorithm' or 'Pseudocode'. |
| Open Source Code | Yes | Code and supplemental material are available at https://github.com/skJack/LTW. |
| Open Datasets | Yes | To evaluate the capability of our proposed method, we build different benchmarks based on three popular deepfake databases: FaceForensics++ (Rossler et al. 2019), Celeb-DF (Li et al. 2019), and DFDC (Li et al. 2019). |
| Dataset Splits | Yes | We follow the official division of the dataset, in which 720 videos are used for training, 140 videos for validation and 140 videos for testing. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models, memory, or types of computing resources used for experiments. |
| Software Dependencies | No | The paper mentions using EfficientNet-b0 and Adam optimizer but does not specify version numbers for any software dependencies like Python, PyTorch, or CUDA. |
| Experiment Setup | Yes | The learning rates α for meta-training and γ for meta-testing are both 0.001 with the Adam optimizer. We use a step LR scheduler, where the step size is 5 and gamma is set to 0.1. The weight-aware network update learning rate φ is 0.001. The hyperparameter β, which balances meta-training and meta-testing, is set to 1, and the hyperparameter λ, which balances the CE loss and the ICC loss, is set to 0.01. The batch size is 25. |
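
For reference, below is a minimal PyTorch-style sketch of the reported training configuration. Only the stated values (EfficientNet-b0 backbone, Adam with learning rate 0.001, StepLR with step size 5 and gamma 0.1, β = 1, λ = 0.01, batch size 25) come from the table above; the model construction via `timm`, the loss composition, and all variable names are assumptions made for illustration, not the authors' actual code.

```python
import torch
import torch.nn as nn
import timm

# Assumed backbone: the paper reports EfficientNet-b0; construction via timm
# is an illustrative choice, not taken from the authors' implementation.
model = timm.create_model("efficientnet_b0", pretrained=True, num_classes=2)

# Reported optimizer and scheduler: Adam with lr = 0.001 for both the
# meta-training (alpha) and meta-testing (gamma) stages, StepLR with
# step_size = 5 and gamma = 0.1.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.1)

ce_loss = nn.CrossEntropyLoss()
beta = 1.0   # reported weight balancing meta-training and meta-testing
lam = 0.01   # reported weight balancing the CE loss and the ICC loss
batch_size = 25


def total_loss(meta_train_ce: torch.Tensor,
               meta_test_ce: torch.Tensor,
               icc: torch.Tensor) -> torch.Tensor:
    """Combine the loss terms with the reported weights beta and lambda.

    The exact composition of the objective in the paper may differ; this
    helper only mirrors the stated weighting factors.
    """
    return meta_train_ce + beta * meta_test_ce + lam * icc
```

The weight-aware network mentioned in the setup would have its own Adam optimizer with the reported update learning rate φ = 0.001; it is omitted here because its architecture is not specified in this report.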