Exploiting Fine-Grained Face Forgery Clues via Progressive Enhancement Learning

Authors: Qiqi Gu, Shen Chen, Taiping Yao, Yang Chen, Shouhong Ding, Ran Yi

AAAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on several datasets show that our method outperforms the state-of-the-art face forgery detection methods. Our method exceeds the other comparison methods and achieves state-of-the-art performance on the common benchmark FaceForensics++ dataset and the newly published WildDeepfake dataset. The cross-dataset evaluations on three additional challenging datasets prove the generalization ability, while the perturbed evaluations prove the robustness of our method.
Researcher Affiliation | Collaboration | Qiqi Gu (1,2*), Shen Chen (2*), Taiping Yao (2*), Yang Chen (2), Shouhong Ding (2), Ran Yi (1,3); 1: Shanghai Jiao Tong University, 2: Youtu Lab, Tencent, 3: MoE Key Lab of Artificial Intelligence, SJTU
Pseudocode | No | The paper does not contain pseudocode or clearly labeled algorithm blocks.
Open Source Code | No | The paper states "We implement the proposed framework via open-source PyTorch.", but this indicates the use of an open-source framework, not that the authors' specific implementation for this paper is publicly released with a link.
Open Datasets | Yes | We adopt five widely-used public datasets in our experiments, i.e., FaceForensics++ (Rossler et al. 2019), WildDeepfake (Zi et al. 2020), Celeb-DF (Li et al. 2020b), DeepFakeDetection (Dufour and Gully 2019), and the Deepfake Detection Challenge (Dolhansky et al. 2020); the former two are used for both training and evaluation, while the latter three are used for cross-dataset evaluation only.
Dataset Splits | Yes | We follow the official splits by using 720 videos for training, 140 videos for validation, and 140 videos for testing.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU or CPU models, or cloud instance types) used to run the experiments.
Software Dependencies | No | The paper mentions "We implement the proposed framework via open-source PyTorch.", but it does not specify a version for PyTorch or any other software dependency.
Experiment Setup | Yes | The EfficientNet-B4 (Tan and Le 2019) pre-trained on ImageNet was adopted as the backbone of our network, which is trained with the Adam optimizer with a learning rate of 2 x 10^-4, a weight decay of 1 x 10^-5, and a batch size of 32. The stride of the sliding window is set to 2 in all experiments.
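As a concrete reading of that setup, the sketch below wires an ImageNet-pretrained EfficientNet-B4 backbone to the reported Adam hyperparameters. It is a minimal sketch, not the authors' code (none is released): the timm model name, the binary real/fake head, the 380x380 input resolution, and the placeholder dataset are assumptions; only the backbone, optimizer, learning rate, weight decay, and batch size come from the paper.

# Minimal sketch of the reported training configuration (not the authors' code).
# Assumptions: timm's "efficientnet_b4" as the ImageNet-pretrained backbone and a
# 2-class real/fake head; the dataset below is a random stand-in placeholder.
import timm
import torch
from torch.utils.data import DataLoader, TensorDataset

# EfficientNet-B4 backbone pre-trained on ImageNet, with a binary classification head.
model = timm.create_model("efficientnet_b4", pretrained=True, num_classes=2)

# Adam optimizer with the hyperparameters reported in the paper.
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4, weight_decay=1e-5)
criterion = torch.nn.CrossEntropyLoss()

# Placeholder data: random 380x380 crops (EfficientNet-B4's native input size, an
# assumption here) with random real/fake labels; a real run would load face crops
# from FaceForensics++ or WildDeepfake instead.
dataset = TensorDataset(torch.randn(64, 3, 380, 380), torch.randint(0, 2, (64,)))
loader = DataLoader(dataset, batch_size=32, shuffle=True)  # batch size 32, as reported

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()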