CMUA-Watermark: A Cross-Model Universal Adversarial Watermark for Combating Deepfakes
Authors: Hao Huang, Yongtao Wang, Zhaoyu Chen, Yuze Zhang, Yuheng Li, Zhi Tang, Wei Chu, Jingdong Chen, Weisi Lin, Kai-Kuang Ma (pp. 989–997)
AAAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental results demonstrate that the proposed CMUA-Watermark can effectively distort the fake facial images generated by multiple deepfake models while achieving a better performance than existing methods. |
| Researcher Affiliation | Collaboration | 1Peking University 2State Key Laboratory of Media Convergence Production Technology and Systems 3Fudan University 4Ant Group 5Nanyang Technological University |
| Pseudocode | Yes | Specifically, as shown in Algorithm 1, during the proposed cross-model universal attacking process, batches of input images iteratively go through the PGD (Madry et al. 2018) attack to generate adversarial perturbations, which then go through a two-level perturbation fusion mechanism to combine into a fused CMUA-Watermark that serves as the initial perturbation for the next model. |
| Open Source Code | Yes | Our code is available at https://github.com/VDIGPKU/CMUA-Watermark. |
| Open Datasets | Yes | In our experiments, we use the CelebA (Liu et al. 2015) test set as the main dataset, which contains 19,962 facial images. We use the first 128 images in the set as training images and evaluate our method on all facial images of the CelebA test set and the LFW (Huang et al. 2007) dataset to ensure credibility. |
| Dataset Splits | Yes | We use the first 128 images in the set as training images and evaluate our method on all facial images of the CelebA test set and the LFW (Huang et al. 2007) dataset to ensure credibility. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU models, CPU types, memory) used to run the experiments. |
| Software Dependencies | No | The paper mentions "an open-source liveness detection system HyperFAS" with a link (https://github.com/zeusees/HyperFAS) but does not specify its version or any other software dependencies with version numbers. |
| Experiment Setup | Yes | During the process of searching for the step sizes, the maximum number of iterations is 1k, and the search space of the step size for each model is [0, 10]. We first search for the step sizes with batch size = 16 and then use the searched step sizes to conduct cross-model attacks with batch size = 64. |
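The attacking process summarized in the Pseudocode row (PGD perturbations fused into a universal watermark) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the loss (MSE distortion of the deepfake model's output), the `eps` budget, and the mean-based `fuse` helper are all assumptions here; the paper's Algorithm 1 and its two-level fusion mechanism may differ in detail.

```python
import torch

def pgd_perturbation(model, images, eps=0.05, step_size=0.01, iters=10):
    """One PGD attack round (sketch): push the deepfake model's output on
    perturbed images away from its output on clean images."""
    clean_out = model(images).detach()
    delta = torch.zeros_like(images, requires_grad=True)
    for _ in range(iters):
        # Maximize distortion of the generated (fake) image.
        loss = torch.nn.functional.mse_loss(model(images + delta), clean_out)
        loss.backward()
        with torch.no_grad():
            delta += step_size * delta.grad.sign()  # gradient-ascent step
            delta.clamp_(-eps, eps)                 # L_inf projection
        delta.grad.zero_()
    return delta.detach()

def fuse(perturbations):
    """Hypothetical fusion step: average per-image perturbations into one
    shared watermark that seeds the attack on the next model."""
    return torch.stack(perturbations).mean(dim=0)
```

In the cross-model loop described in the table, the fused watermark from one deepfake model would be passed as the initial perturbation when attacking the next model, which is what makes the final watermark universal across models.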