Are Watermarks Bugs for Deepfake Detectors? Rethinking Proactive Forensics
Authors: Xiaoshuai Wu, Xin Liao, Bo Ou, Yuling Liu, Zheng Qin
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate the effectiveness of the proposed AdvMark, leveraging robust watermarking to fool Deepfake detectors, which can help improve the accuracy of downstream Deepfake detection without tuning the in-the-wild detectors. We believe this work will shed some light on the harmless proactive forensics against Deepfake. |
| Researcher Affiliation | Academia | Xiaoshuai Wu, Xin Liao, Bo Ou, Yuling Liu, Zheng Qin. College of Computer Science and Electronic Engineering, Hunan University, Changsha, China. {shinewu, xinliao, oubo, yuling liu, zqin}@hnu.edu.cn |
| Pseudocode | No | The paper describes methods conceptually and with equations but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Supplementary material, including more details and experiments of our work, can be found in the supplement at https://github.com/sh1newu/AdvMark. |
| Open Datasets | Yes | We collect the real faces sourced from CelebA-HQ [Karras et al., 2018] and resize them to the resolution of 256 × 256. These real faces are then manipulated by recent Deepfake generative models: SimSwap [Chen et al., 2020] for face swapping, FOMM [Siarohin et al., 2019] for expression reenactment, and StarGAN [Choi et al., 2018] for attribute editing. We further utilize the entire synthesized faces provided by StyleGAN [Karras et al., 2019], which are also resized to 256 × 256. |
| Dataset Splits | Yes | More specifically, there are equal numbers of faces in the real and fake subsets, and we divide them into training, validation, and testing sets, referencing the official split of 24183/2993/2824. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments. It only mentions the training process details like epochs and batch size. |
| Software Dependencies | No | The paper mentions using MBRS and SepMark and following their original implementations for hyper-parameters, but does not list specific software dependencies with version numbers (e.g., Python, PyTorch, specific libraries). |
| Experiment Setup | Yes | For the hyper-parameter settings of MBRS and SepMark, we strictly follow their original implementations. For example, besides the difference in the length of the watermark bits (256-bit for MBRS and 128-bit for SepMark), the noise layers they use are also inconsistent. To be specific, MBRS includes Identity, JPEG, and simulated JPEG, while SepMark contains all the noises listed in Table 3. Therefore, we will compare our AdvMark with the respective baselines for each to mitigate the model bias introduced by different backbones. Lastly, the whole fine-tuning lasted for 10 epochs with a batch size of 8, and we set the weight of the fooling loss λ4 to 0.1. |
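The Experiment Setup row can be summarized as a small configuration sketch. This is only an illustration of the reported hyper-parameters (10 epochs, batch size 8, fooling-loss weight λ4 = 0.1); the function and weight names (`total_loss`, `FOOL_WEIGHT`, λ1–λ3) are hypothetical and not taken from the AdvMark codebase, which does not publish the other loss weights in this excerpt.

```python
# Hypothetical sketch of the AdvMark fine-tuning setup described above.
# Only EPOCHS, BATCH_SIZE, and FOOL_WEIGHT (lambda_4) come from the paper;
# all other names and weights are illustrative placeholders.
EPOCHS = 10
BATCH_SIZE = 8
FOOL_WEIGHT = 0.1  # lambda_4, the weight of the fooling loss

def total_loss(encoder_loss: float,
               decoder_loss: float,
               adversarial_loss: float,
               fooling_loss: float,
               l1: float = 1.0, l2: float = 1.0, l3: float = 1.0,
               l4: float = FOOL_WEIGHT) -> float:
    """Weighted sum of component losses; l1-l3 are placeholder weights,
    l4 is the fooling-loss weight reported in the paper."""
    return (l1 * encoder_loss
            + l2 * decoder_loss
            + l3 * adversarial_loss
            + l4 * fooling_loss)
```

With λ4 = 0.1, a fooling loss of 10 contributes 1.0 to the total, reflecting that the fooling term is deliberately down-weighted relative to the watermarking losses.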