WAVES: Benchmarking the Robustness of Image Watermarks

Authors: Bang An, Mucong Ding, Tahseen Rabbani, Aakriti Agrawal, Yuancheng Xu, Chenghao Deng, Sicheng Zhu, Abdirisak Mohamed, Yuxin Wen, Tom Goldstein, Furong Huang

ICML 2024

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Our novel, comprehensive evaluation reveals previously undetected vulnerabilities of several modern watermarking algorithms. We envision WAVES as a toolkit for the future development of robust watermarks. |
| Researcher Affiliation | Collaboration | 1. University of Maryland, College Park; 2. SAP Labs, LLC. |
| Pseudocode | No | The paper describes methods and processes through narrative text and figures (e.g., Figure 2 for the evaluation workflow) but does not include any explicitly labeled pseudocode blocks or algorithms. |
| Open Source Code | Yes | The project is available at https://wavesbench.github.io/. |
| Open Datasets | Yes | We utilize three datasets for the non-watermarked reference images in our evaluation: DiffusionDB, MS-COCO, and DALL·E 3, each comprising 5,000 reference images and prompts. |
| Dataset Splits | Yes | In all three settings, we use 5,000 images (2,500 images per class) for validation (derived from the same source as the training set), and the training yields nearly 100% validation accuracy in all cases. |
| Hardware Specification | No | The paper describes the software and datasets used for the experiments but does not provide specific hardware details, such as GPU models, CPU types, or memory specifications. |
| Software Dependencies | No | The paper mentions software components such as the torchvision library and ResNet18 but does not specify version numbers for these or other key software dependencies required for reproducibility. |
| Experiment Setup | Yes | We conduct the attack using a range of perturbation budgets ϵ, specifically {2/255, 4/255, 6/255, 8/255}. All the attacks are configured with a step size of α = 0.05ϵ and a total of 200 iterations. |
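The Experiment Setup row describes a PGD-style adversarial attack: an L∞ perturbation budget ϵ, a step size α = 0.05ϵ, and 200 iterations. A minimal NumPy sketch of such a signed-gradient attack loop is shown below, assuming those hyperparameters; the `grad_fn` callback and the toy target-matching loss are illustrative placeholders, not the paper's watermark-detection loss or implementation.

```python
import numpy as np

def pgd_attack(x, grad_fn, eps, n_iters=200):
    """Iterative signed-gradient ascent projected into an L-infinity
    ball of radius eps around the original image x (pixels in [0, 1])."""
    alpha = 0.05 * eps                 # step size from the reported setup
    x_adv = x.copy()
    for _ in range(n_iters):
        g = grad_fn(x_adv)             # gradient of the attack loss w.r.t. the image
        x_adv = x_adv + alpha * np.sign(g)        # signed-gradient step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project back into the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # keep a valid pixel range
    return x_adv

if __name__ == "__main__":
    # Toy usage: a stand-in loss that pulls pixels toward a random target
    # image; grad is the gradient of -0.5 * ||z - target||^2.
    rng = np.random.default_rng(0)
    x = rng.random((3, 8, 8))
    target = rng.random((3, 8, 8))
    for eps in (2 / 255, 4 / 255, 6 / 255, 8 / 255):
        adv = pgd_attack(x, lambda z: target - z, eps)
        assert np.max(np.abs(adv - x)) <= eps + 1e-8  # budget respected
```

With α = 0.05ϵ, each step moves 5% of the budget, so the loop can reach the boundary of the ϵ-ball within 20 iterations and spends the remaining iterations refining the direction of the perturbation.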