Pluggable Watermarking of Deepfake Models for Deepfake Detection

Authors: Han Bao, Xuhong Zhang, Qinying Wang, Kangming Liang, Zonghui Wang, Shouling Ji, Wenzhi Chen

IJCAI 2024

Reproducibility Variable Result LLM Response
Research Type Experimental Experiments show our method successfully detects Deepfakes with an average accuracy exceeding 94% even in heavy lossy channels.
Researcher Affiliation Academia School of Software Technology, Zhejiang University; College of Computer Science, Zhejiang University; College of Engineering, Zhejiang University.
Pseudocode No The paper describes its methods through text and diagrams (Figures 1-4) but does not include explicit pseudocode or algorithm blocks.
Open Source Code Yes The source code can be found at https://github.com/GuaiZao/Pluggable-Watermarking
Open Datasets Yes we use the CelebA [Liu et al., 2015] dataset to train the watermark of these Deepfake models. For StyleGAN and StyleGAN2, we train these models by random style vectors, and the number of vectors is the same as in CelebA images. In the evaluation process, we randomly select 3000 images in FFHQ [Karras et al., 2019] for positive samples.
Dataset Splits No The paper mentions using CelebA for training and FFHQ for positive samples in evaluation, but does not explicitly specify a separate validation dataset split (e.g., by percentage or count) for reproducibility.
Hardware Specification No The paper does not specify any hardware details (e.g., GPU, CPU models, or memory) used for running the experiments.
Software Dependencies No The paper does not list specific software dependencies with version numbers (e.g., deep learning frameworks or libraries).
Experiment Setup Yes We train the mask for about 1e5 iterations with batch size 32 and learning rate 1e-2. We train the watermarked models for about 5e7 iterations with batch size 32, learning rate 1e-7 for the watermarked parameters, and learning rate 1e-5 for the extractor.
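The quoted setup implies a two-phase schedule with distinct learning rates per parameter group. A minimal sketch of that schedule is below; it is not the authors' code (the paper releases code separately), and the group names ("watermark_params", "extractor") and the plain-SGD update rule are assumptions for illustration only. In a deep-learning framework this would typically be expressed as optimizer parameter groups (e.g. one optimizer with two groups at different learning rates).

```python
# Illustrative sketch of the reported hyperparameters; only the
# iteration counts, batch size, and learning rates come from the paper.

def training_schedule():
    """Two-phase schedule matching the quoted experiment setup."""
    # Phase 1: train the mask.
    mask_phase = {"iterations": int(1e5), "batch_size": 32, "lr": 1e-2}
    # Phase 2: train the watermarked model, with separate learning
    # rates for the watermarked parameters and the extractor.
    watermark_phase = {
        "iterations": int(5e7),
        "batch_size": 32,
        "param_groups": [
            {"name": "watermark_params", "lr": 1e-7},  # assumed name
            {"name": "extractor", "lr": 1e-5},          # assumed name
        ],
    }
    return mask_phase, watermark_phase

def sgd_step(params, grads, lr):
    """One vanilla SGD update for a single parameter group; a real run
    would use a framework optimizer applied per group."""
    return [p - lr * g for p, g in zip(params, grads)]

if __name__ == "__main__":
    mask_phase, watermark_phase = training_schedule()
    lrs = {g["name"]: g["lr"] for g in watermark_phase["param_groups"]}
    # With equal gradients, the extractor moves 100x further per step
    # than the watermarked parameters (1e-5 vs 1e-7).
    print(lrs["extractor"] / lrs["watermark_params"])
```

The per-group learning-rate gap (1e-7 vs 1e-5) is the substantive detail: the watermarked generator parameters are updated far more gently than the extractor, which is consistent with perturbing a pretrained model as little as possible.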