Frequency-Aware Deepfake Detection: Improving Generalizability through Frequency Space Domain Learning
Authors: Chuangchuang Tan, Yao Zhao, Shikui Wei, Guanghua Gu, Ping Liu, Yunchao Wei
AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimentation involving 17 GANs demonstrates the effectiveness of our proposed method, showcasing state-of-the-art performance (+9.8%) while requiring fewer parameters. |
| Researcher Affiliation | Academia | Chuangchuang Tan (1,2), Yao Zhao (1,2)*, Shikui Wei (1,2), Guanghua Gu (3,4), Ping Liu (5), Yunchao Wei (1,2). Affiliations: (1) Institute of Information Science, Beijing Jiaotong University; (2) Beijing Key Laboratory of Advanced Information Science and Network Technology; (3) School of Information Science and Engineering, Yanshan University; (4) Hebei Key Laboratory of Information Transmission and Signal Processing; (5) Center for Frontier AI Research, IHPC, A*STAR, Singapore |
| Pseudocode | No | The paper contains architectural diagrams but no explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code is available at https://github.com/chuangchuangtan/FreqNet-DeepfakeDetection. |
| Open Datasets | Yes | To ensure a consistent basis for comparison, we employ the training set of ForenSynths (Wang et al. 2020) to train the detectors, aligning with baselines (Wang et al. 2020; Jeong et al. 2022a,c). |
| Dataset Splits | No | The paper describes training and test sets but does not explicitly provide details about a validation set split or methodology for its use. |
| Hardware Specification | Yes | We employ the PyTorch framework (Paszke et al. 2019) for the implementation of our method, utilizing the computational power of the NVIDIA GeForce RTX 3090 GPU. |
| Software Dependencies | No | We employ the PyTorch framework (Paszke et al. 2019) for the implementation of our method... For the critical task of Fast Fourier Transform (FFT), we leverage the torch.fft.fftn function within the PyTorch library. (An illustrative sketch of this FFT usage follows the table.) |
| Experiment Setup | Yes | During the training process, we utilize the Adam optimizer (Kingma and Ba 2015) with an initial learning rate of 2 × 10⁻². The batch size is set at 32, and we train the model for 100 epochs. A learning rate decay strategy is employed, reducing the learning rate by twenty percent after every ten epochs. (A minimal training-loop sketch of this setup follows the table.) |
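
The paper names `torch.fft.fftn` as its FFT dependency. Below is a minimal sketch of how that call can isolate high-frequency image content in the spirit of the paper's frequency-space learning; the center-masking strategy, the `cutoff` ratio, and taking the real part of the inverse transform are illustrative assumptions, not the authors' implementation.

```python
import torch

def high_freq_representation(x: torch.Tensor, cutoff: float = 0.25) -> torch.Tensor:
    """Keep only high-frequency content of an image batch (B, C, H, W).

    Sketch only: the zeroed low-frequency block and the `cutoff` ratio
    are illustrative choices, not values from the paper.
    """
    # FFT over the spatial dimensions, as the paper does with torch.fft.fftn.
    spec = torch.fft.fftn(x, dim=(-2, -1))
    # Shift the zero-frequency component to the center of the spectrum.
    spec = torch.fft.fftshift(spec, dim=(-2, -1))

    _, _, h, w = x.shape
    ch, cw = h // 2, w // 2
    rh, rw = int(cutoff * h / 2), int(cutoff * w / 2)
    # Zero out the central (low-frequency) block, keeping high frequencies.
    spec[..., ch - rh:ch + rh, cw - rw:cw + rw] = 0

    # Invert the shift and transform back to the spatial domain.
    spec = torch.fft.ifftshift(spec, dim=(-2, -1))
    return torch.fft.ifftn(spec, dim=(-2, -1)).real

if __name__ == "__main__":
    imgs = torch.randn(4, 3, 64, 64)                 # dummy image batch
    print(high_freq_representation(imgs).shape)      # torch.Size([4, 3, 64, 64])
```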
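
Likewise, the reported experiment setup maps onto a standard PyTorch training configuration. The sketch below assumes a `StepLR` scheduler (`step_size=10`, `gamma=0.8`) to realize "reducing the learning rate by twenty percent after every ten epochs"; the stand-in model, dummy data, and BCE loss are placeholders rather than the FreqNet detector or the ForenSynths training set.

```python
import torch
from torch import nn, optim
from torch.optim.lr_scheduler import StepLR
from torch.utils.data import DataLoader, TensorDataset

# Placeholders for the FreqNet detector and the ForenSynths training set.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.AdaptiveAvgPool2d(1),
                      nn.Flatten(), nn.Linear(8, 1))
data = TensorDataset(torch.randn(64, 3, 32, 32),
                     torch.randint(0, 2, (64,)).float())
train_loader = DataLoader(data, batch_size=32, shuffle=True)  # batch size 32

optimizer = optim.Adam(model.parameters(), lr=2e-2)  # initial learning rate 2e-2
# A 20% decay every 10 epochs corresponds to StepLR(step_size=10, gamma=0.8).
scheduler = StepLR(optimizer, step_size=10, gamma=0.8)
criterion = nn.BCEWithLogitsLoss()  # binary real-vs-fake objective (an assumption)

for epoch in range(100):            # 100 training epochs, per the paper
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images).squeeze(1), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()                # apply the 20% decay every 10 epochs
```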