OST: Improving Generalization of DeepFake Detection via One-Shot Test-Time Training

Authors: Liang Chen, Yong Zhang, Yibing Song, Jue Wang, Lingqiao Liu

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive results across several benchmark datasets demonstrate that our approach performs favorably against existing arts in terms of generalization to unseen data and robustness to different post-processing steps." and "4 Experiments: This section first presents the setups and then shows extensive experimental results to demonstrate the superiority of our approach."
Researcher Affiliation | Collaboration | Liang Chen (1), Yong Zhang (2), Yibing Song (2), Jue Wang (2), Lingqiao Liu (1); (1) The University of Adelaide, (2) Tencent AI Lab
Pseudocode | Yes | "Algorithm 1: One-shot online training" and "Algorithm 2: Offline meta-training"
Open Source Code | Yes | "Code is available at https://github.com/liangchen527/OST."
Open Datasets | Yes | "Following the protocols in existing deepfake detection methods [28, 35, 57], we use the data in the FaceForensics++ (FF++) dataset [43] for training."
Dataset Splits | Yes | "This dataset contains 1000 videos, in which 720 videos are used for training, 140 videos are reserved for validation, and the remaining 140 are used for testing."
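The 720/140/140 split above can be sketched as follows. This is an illustrative sketch only: the sequential video IDs and slicing order are assumptions made here for clarity, whereas the actual FF++ protocol distributes fixed split files.

```python
# Illustrative sketch of the FF++ split described above (720/140/140).
# Sequential IDs are an assumption for illustration; the real dataset
# ships predefined train/val/test split lists.
video_ids = list(range(1000))  # FF++ contains 1000 videos

train_ids = video_ids[:720]    # 720 videos for training
val_ids = video_ids[720:860]   # 140 videos reserved for validation
test_ids = video_ids[860:]     # remaining 140 videos for testing

assert len(train_ids) == 720
assert len(val_ids) == 140
assert len(test_ids) == 140
```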
Hardware Specification | No | The main paper defers hardware details to the supplementary material ("See the supplementary material for hardware specifications"), so they are not provided in the main text.
Software Dependencies | No | The paper mentions software components such as Xception, DLIB, and the Adam optimizer but does not specify their version numbers.
Experiment Setup | Yes | "We use the Adam optimizer [26] for optimizing the network with β1 = 0.9 and β2 = 0.999, and the meta-batch size is set to be 20. The learning rates for the inner update (i.e., γ in Eq. (1) and (2)) and meta update (i.e., λ in Eq. (2)) are fixed as 0.0005 and 0.0002 for both the offline and online training phases."
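The inner/meta update structure that these hyperparameters plug into can be sketched in miniature. This is a hedged, first-order MAML-style sketch on a toy scalar model, not the paper's implementation: only the values of γ, λ, the Adam betas, and the meta-batch size come from the quoted setup; the numerical gradient, the quadratic toy losses, and the `meta_step` helper are assumptions introduced here for illustration (the paper uses Adam on a deep network, not plain gradient steps).

```python
# Hyperparameter values quoted from the paper's setup.
INNER_LR = 0.0005     # γ in Eq. (1)/(2): inner (one-shot) update step size
META_LR = 0.0002      # λ in Eq. (2): meta update step size
BETAS = (0.9, 0.999)  # Adam β1, β2 (unused in this toy sketch)
META_BATCH = 20       # tasks per meta-batch in the paper

def grad(loss_fn, w, eps=1e-6):
    """Numerical gradient of a scalar loss at w (stand-in for autograd)."""
    return (loss_fn(w + eps) - loss_fn(w - eps)) / (2 * eps)

def meta_step(w, task_losses):
    """One first-order meta update: take an inner step per task, then
    average the gradients evaluated at the adapted weights."""
    meta_grad = 0.0
    for loss_fn in task_losses:
        w_adapted = w - INNER_LR * grad(loss_fn, w)  # inner update with γ
        meta_grad += grad(loss_fn, w_adapted)        # gradient after adaptation
    return w - META_LR * meta_grad / len(task_losses)  # meta update with λ

# Toy usage: quadratic losses with different optima stand in for tasks.
tasks = [lambda w, c=c: (w - c) ** 2 for c in (0.0, 1.0, 2.0)]
w = 5.0
for _ in range(1000):
    w = meta_step(w, tasks)
# w drifts toward the tasks' shared optimum region around 1.0
```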