DiffusionFake: Enhancing Generalization in Deepfake Detection via Guided Stable Diffusion
Authors: Ke Sun, Shen Chen, Taiping Yao, Hong Liu, Xiaoshuai Sun, Shouhong Ding, Rongrong Ji
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 4 Experiment |
| Researcher Affiliation | Collaboration | 1 Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University, 361005, P.R. China. 2 Youtu Lab, Tencent, P.R. China. 3 Osaka University, Japan. |
| Pseudocode | No | The paper does not contain any sections or figures explicitly labeled 'Pseudocode' or 'Algorithm'. |
| Open Source Code | Yes | Code is available at https://github.com/skJack/DiffusionFake.git. |
| Open Datasets | Yes | Dataset. To evaluate the generalization ability of DiffusionFake, we conduct experiments on several challenging datasets: (1) FaceForensics++ (FF++) [31]; (2) Celeb-DF [23]; (3) DeepFakeDetection (DFD); (4) DFDC Preview (DFDC-P) [8]; (5) WildDeepfake [54]; (6) DiffSwap [6]. |
| Dataset Splits | No | The paper mentions following 'the data split strategy used in FaceForensics++ [31]' but does not explicitly state the train/validation/test percentages or sample counts for reproduction within its own text. |
| Hardware Specification | No | The paper does not specify the exact hardware used for training or inference, such as specific GPU or CPU models, or details about the computing environment. |
| Software Dependencies | Yes | During training, we utilize a pre-trained Stable Diffusion 1.5 model with frozen parameters. |
| Experiment Setup | Yes | Input images are resized to 224x224 pixels. We employ the Adam optimizer with a learning rate of 1e-5 and a batch size of 32. The model is trained for 20 epochs. The hyperparameters λs and λt are set to 0.7 and 1, respectively. |
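The hyperparameters reported in the Experiment Setup and Software Dependencies rows can be collected into a single configuration sketch. The variable names (e.g. `lambda_s`, `lambda_t`) and the weighted-loss combination below are illustrative assumptions for readability, not the authors' actual code.

```python
# Hypothetical training configuration assembled from the values reported
# in the table; key names are assumptions, not taken from the repository.
TRAIN_CONFIG = {
    "input_size": (224, 224),        # input images resized to 224x224 pixels
    "optimizer": "Adam",
    "learning_rate": 1e-5,
    "batch_size": 32,
    "epochs": 20,
    "lambda_s": 0.7,                 # weight for the source-guidance term
    "lambda_t": 1.0,                 # weight for the target-guidance term
    "diffusion_backbone": "Stable Diffusion 1.5 (frozen parameters)",
}

def total_loss(loss_cls, loss_source, loss_target, cfg=TRAIN_CONFIG):
    """Weighted sum of a classification loss and two guidance losses.
    The exact combination scheme is assumed here for illustration."""
    return (loss_cls
            + cfg["lambda_s"] * loss_source
            + cfg["lambda_t"] * loss_target)

print(total_loss(1.0, 1.0, 1.0))  # 1 + 0.7*1 + 1.0*1 = 2.7
```

Keeping the reported values in one dictionary like this makes it straightforward to check a reimplementation against the paper's stated setup.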