Dual Contrastive Learning for General Face Forgery Detection

Authors: Ke Sun, Taiping Yao, Shen Chen, Shouhong Ding, Jilin Li, Rongrong Ji (pp. 2316-2324)

AAAI 2022

Reproducibility Variable Result LLM Response
Research Type Experimental Extensive experiments and visualizations on several datasets demonstrate the generalization of our method against the state-of-the-art competitors.
Researcher Affiliation Collaboration Ke Sun1, Taiping Yao2, Shen Chen2, Shouhong Ding2, Jilin Li2, Rongrong Ji1 1Media Analytics and Computing Lab, Department of Artificial Intelligence, School of Informatics, Xiamen University, 361005, China 2Youtu Lab, Tencent, China
Pseudocode No The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code Yes Our Code is available at https://github.com/Tencent/TFace.git.
Open Datasets Yes To evaluate our method, we conduct experiments on five famous challenging datasets: FaceForensics++ (Rossler et al. 2019), Celeb-DF (Li et al. 2019b), DFDC (Dolhansky et al. 2020), DFD, WildDeepfake (Zi et al. 2020).
Dataset Splits Yes FaceForensics++ (Rossler et al. 2019) is a large-scale forgery face dataset containing 720 videos for training and 280 videos for validation or testing.
Hardware Specification No The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies No The paper mentions 'Adam optimizer' and 'EfficientNet-b4' but does not provide specific version numbers for software dependencies or programming frameworks (e.g., PyTorch 1.9, Python 3.8).
Experiment Setup Yes The learning rate is set to 0.001 and the batch size is set to 32. The EfficientNet-b4 (Tan and Le 2019) pretrained on ImageNet (Deng et al. 2009) is used as our encoders fq and fk. The exponential hyper-parameter β is set to 0.99. The temperature parameter τ of Eq. 3 is set to 0.07 and the queue size |M| is set to 30000. In addition, we set 0.9 and 0.5 for the prototype-updating parameter α and threshold θ. For the balanced weight φ, we set φ = 0.1 for the first 5 epochs as the warm-up period under the guidance of l_ce, then φ is set to 0.5.
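The reported hyper-parameters (momentum β = 0.99, temperature τ = 0.07, queue size |M| = 30000) match a MoCo-style contrastive setup. The sketch below is not the authors' released code (see the TFace repository for that); it is a minimal numpy illustration, with hypothetical function names, of the two pieces those hyper-parameters control: the exponential moving-average update of the key encoder fk toward the query encoder fq, and a temperature-scaled InfoNCE loss over a negative queue.

```python
import numpy as np

BETA = 0.99         # exponential moving-average coefficient for f_k (paper's β)
TAU = 0.07          # temperature of the contrastive loss (paper's τ, Eq. 3)
QUEUE_SIZE = 30000  # size of the negative queue (paper's |M|)


def momentum_update(theta_q, theta_k, beta=BETA):
    """EMA update: key-encoder parameters slowly track the query encoder."""
    return beta * theta_k + (1.0 - beta) * theta_q


def info_nce(q, k_pos, queue, tau=TAU):
    """InfoNCE loss for one query q with positive key k_pos and a queue
    of negative keys (one key per row). All features are L2-normalized
    so dot products are cosine similarities."""
    q = q / np.linalg.norm(q)
    k_pos = k_pos / np.linalg.norm(k_pos)
    negs = queue / np.linalg.norm(queue, axis=1, keepdims=True)
    # logit 0 is the positive pair; the rest are negatives from the queue
    logits = np.concatenate([[q @ k_pos], negs @ q]) / tau
    logits -= logits.max()  # subtract max for numerical stability
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())
```

With τ = 0.07 the similarities are sharply scaled, so even small gaps between the positive and the hardest negatives dominate the loss; β = 0.99 keeps the key encoder consistent across iterations so the 30000-entry queue stays comparable.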