MuST: Robust Image Watermarking for Multi-Source Tracing
Authors: Guanjie Wang, Zehua Ma, Chang Liu, Xi Yang, Han Fang, Weiming Zhang, Nenghai Yu
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments demonstrate the excellent performance of MuST in tracing the sources of image materials within composite images compared with SOTA watermarking methods: it maintains extraction accuracy above 98% when tracing the sources of at least 3 different image materials while keeping the average PSNR of the watermarked image materials above 42.51 dB. |
| Researcher Affiliation | Academia | Guanjie Wang¹, Zehua Ma*¹, Chang Liu¹, Xi Yang¹, Han Fang², Weiming Zhang*¹, Nenghai Yu¹; ¹University of Science and Technology of China, ²National University of Singapore; wangguanjie@mail.ustc.edu.cn |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | We released our code on https://github.com/MrCrims/MuST. |
| Open Datasets | Yes | To facilitate the network training of our proposed scheme, we constructed a new dataset, namely the Single-Object Image Material Dataset (SOIM). It comprises 6.5k image materials with white backgrounds, selected from JD Product-10k (Bai et al. 2020), Google Scanned Objects (Downs et al. 2022), and the Grocery Store Dataset (Klasson, Zhang, and Kjellström 2019). Furthermore, each image material in SOIM is paired with a corresponding mask of the primary component, as shown in the first row of Figure 3. These masks are adopted to segment the primary component, which is placed on a background image to generate the composite image in the proposed multi-source image composing simulation model (see the compositing sketch after this table). Some examples of background images are shown in the second row of Figure 3. In our experiments, 6k image materials were used for training and 0.5k were dedicated to testing. Besides, to validate the generalization capability of MuST across different types of image materials, the CASIA V2.0 dataset (Dong, Wang, and Tan 2013; Pham et al. 2019) and the Stanford Background Dataset (ICCV09) (Gould, Fulton, and Koller 2009) are used as extra testing datasets. |
| Dataset Splits | No | The paper states '6k image materials were used for training and 0.5k were dedicated to testing' but does not describe a validation split. |
| Hardware Specification | Yes | The whole framework is implemented in PyTorch (Paszke et al. 2019) and executed on an NVIDIA RTX A6000. |
| Software Dependencies | No | The paper mentions 'PyTorch' and 'Kornia' but does not provide specific version numbers for these software dependencies needed to replicate the experiment. |
| Experiment Setup | Yes | The watermark messages are randomly generated sequences of 30 bits. In the proposed multi-source image composing simulation model (MIC), implemented with Kornia (E. Riba and Bradski 2020), the input image materials are of size 3 × 640 × 640 pixels and the background images are of size 3 × 1000 × 1000. The parameter settings in MIC are as follows: Gaussian blur σ ∈ [0.1, 1]; resize scale rate ∈ [0.3, 0.4]; brightness adjustment ∈ [−0.2, 0.2]; contrast adjustment ∈ [0.8, 1.2] (see the distortion sketch after this table). For the loss function in Eq. (6), we choose λ_ENC = 0.7, λ_ENC2 = 0.001, λ_DEC = 2.0 (see the loss sketch after this table). The batch size in training is set to 3, and the MuST models are trained for 500 epochs with an initial learning rate of 0.0001. |
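
The first step of the multi-source image composing simulation is to paste the masked primary component of an image material onto a background. The following is a minimal sketch of that operation, assuming tensors scaled to [0, 1]; the function name `compose_material` and the caller-supplied offsets are illustrative, not taken from the MuST release.

```python
import torch

def compose_material(background: torch.Tensor, material: torch.Tensor,
                     mask: torch.Tensor, top: int, left: int) -> torch.Tensor:
    """Paste the masked primary component of `material` onto `background`.

    background: (3, 1000, 1000); material: (3, h, w); mask: (1, h, w) in {0, 1}.
    Caller must ensure top + h and left + w stay within the background.
    """
    out = background.clone()
    _, h, w = material.shape
    region = out[:, top:top + h, left:left + w]
    # Alpha-style blend: keep the primary component, preserve background elsewhere.
    out[:, top:top + h, left:left + w] = mask * material + (1 - mask) * region
    return out
```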
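The MIC distortion chain can be sketched with the parameter ranges reported above. This sketch uses Kornia's `RandomGaussianBlur` and plain tensor arithmetic for the photometric adjustments; the composition order, the blur kernel size, the additive brightness model, and the mid-gray contrast pivot are assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F
import kornia.augmentation as KA

# Gaussian blur with sigma sampled from [0.1, 1], applied to every sample.
blur = KA.RandomGaussianBlur(kernel_size=(5, 5), sigma=(0.1, 1.0), p=1.0)

def mic_distort(material: torch.Tensor) -> torch.Tensor:
    """Distort a batch of (B, 3, 640, 640) image materials per the stated ranges."""
    x = blur(material)
    scale = torch.empty(1).uniform_(0.3, 0.4).item()   # resize scale rate in [0.3, 0.4]
    x = F.interpolate(x, scale_factor=scale, mode="bilinear", align_corners=False)
    b = torch.empty(1).uniform_(-0.2, 0.2).item()      # brightness shift in [-0.2, 0.2]
    c = torch.empty(1).uniform_(0.8, 1.2).item()       # contrast factor in [0.8, 1.2]
    # Additive brightness, then contrast scaling about mid-gray, clamped to [0, 1].
    return ((x + b - 0.5) * c + 0.5).clamp(0.0, 1.0)
```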
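Finally, the reported λ values weight the three terms of the training objective in Eq. (6). The sketch below assumes the typical form of such terms in learned watermarking schemes (an MSE fidelity loss, a small adversarial term, and a binary message-recovery loss); the paper's exact term definitions may differ.

```python
import torch
import torch.nn as nn

mse = nn.MSELoss()
bce = nn.BCEWithLogitsLoss()

# Loss weights as reported for Eq. (6).
LAMBDA_ENC, LAMBDA_ENC2, LAMBDA_DEC = 0.7, 0.001, 2.0

def total_loss(cover, watermarked, adv_logits, msg_logits, msg):
    l_enc = mse(watermarked, cover)                        # visual fidelity of watermarked material
    l_enc2 = bce(adv_logits, torch.ones_like(adv_logits))  # assumed adversarial "realness" term
    l_dec = bce(msg_logits, msg)                           # recovery of the 30-bit message
    return LAMBDA_ENC * l_enc + LAMBDA_ENC2 * l_enc2 + LAMBDA_DEC * l_dec
```

Training this objective with batch size 3 for 500 epochs at an initial learning rate of 0.0001 matches the reported setup; the optimizer choice is not stated in the excerpt above.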