Model Watermarking for Image Processing Networks
Authors: Jie Zhang, Dongdong Chen, Jing Liao, Han Fang, Weiming Zhang, Wenbo Zhou, Hao Cui, Nenghai Yu
AAAI 2020, pp. 12805-12812 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments demonstrate the robustness of the proposed watermarking mechanism, which can resist surrogate models learned with different network structures and objective functions. |
| Researcher Affiliation | Collaboration | (1) University of Science and Technology of China, (2) Microsoft Cloud AI, (3) City University of Hong Kong |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code for the described methodology. |
| Open Datasets | Yes | For image deraining, we use 12100 images from the PASCAL VOC dataset as target domain B, and use the synthesis algorithm in (Zhang and Patel 2018) to generate rainy images as domain A. [...] Similarly, for X-ray image debone, we select 6100 high-quality chest X-ray images from the open dataset ChestX-ray8 (Wang et al. 2017) |
| Dataset Splits | Yes | These images are split into three parts: 6000 both for the initial and adversarial training, 6000 to train the surrogate model and 100 for testing. Similarly, for X-ray image debone, [...] They are also divided into three parts: 3000 both for the initial and adversarial training, 3000 to train the surrogate model and 100 for testing. (An illustrative split sketch follows the table.) |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions general software components and network architectures (e.g., UNet, Patch GAN) but does not provide specific version numbers for any libraries or frameworks. |
| Experiment Setup | Yes | By default, λ, λ1, λ2, λ4, λ5, λ6 all equal to 1 and λ3 = 0.01. [...] In our method, we adopt the UNet (Ronneberger, Fischer, and Brox 2015) as the default network structure of H and SM... For the discriminator D, we adopt the Patch GAN (Isola et al. 2017) by default. [...] The objective loss function of our method consists of two parts: the embedding loss L_emd and the extracting loss L_ext, i.e., L = L_emd + λ·L_ext (Eq. 4). (A minimal sketch of this combined objective follows the table.) |
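
The split reported in the Dataset Splits row (6000 / 6000 / 100 out of 12100 deraining images) can be illustrated with a short script. This is not the authors' code: the folder name, file pattern, and random assignment of images to parts are assumptions, since the paper does not state how individual images were selected.

```python
# Illustrative re-creation of the three-way deraining split (6000 / 6000 / 100).
# Folder name, file pattern, and shuffling are assumptions, not from the paper.
import glob
import random

paths = sorted(glob.glob("voc_derain_clean/*.jpg"))   # hypothetical layout, 12100 images
random.Random(0).shuffle(paths)                       # fixed seed for repeatability

train_paths     = paths[:6000]         # initial + adversarial training
surrogate_paths = paths[6000:12000]    # training the surrogate model
test_paths      = paths[12000:12100]   # held-out test set
```

The X-ray debone split follows the same pattern with 3000 / 3000 / 100 images out of 6100.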
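
The Experiment Setup row quotes the paper's combined objective L = L_emd + λ·L_ext and its default weights. The sketch below (assumed PyTorch, not the authors' implementation) only shows how those defaults could be wired up; the grouping of λ1-λ3 into the embedding loss and λ4-λ6 into the extracting loss is a hypothetical illustration, and the individual loss terms themselves follow the paper rather than this code.

```python
# Minimal sketch of L = L_emd + λ·L_ext with the reported defaults
# (λ = λ1 = λ2 = λ4 = λ5 = λ6 = 1, λ3 = 0.01). Term grouping is assumed.
import torch

LAMBDA = 1.0                       # λ: weight of the extracting loss L_ext
EMD_WEIGHTS = [1.0, 1.0, 0.01]     # assumed λ1, λ2, λ3 (λ3 = 0.01 by default)
EXT_WEIGHTS = [1.0, 1.0, 1.0]      # assumed λ4, λ5, λ6

def total_loss(emd_terms, ext_terms):
    """Return L = L_emd + λ·L_ext from lists of per-term loss tensors."""
    l_emd = sum(w * t for w, t in zip(EMD_WEIGHTS, emd_terms))
    l_ext = sum(w * t for w, t in zip(EXT_WEIGHTS, ext_terms))
    return l_emd + LAMBDA * l_ext

# Dummy scalar tensors stand in for the real loss terms (e.g. reconstruction,
# adversarial, watermark-consistency) computed elsewhere during training.
dummy_emd = [torch.tensor(0.50), torch.tensor(0.20), torch.tensor(1.30)]
dummy_ext = [torch.tensor(0.10), torch.tensor(0.05), torch.tensor(0.40)]
print(total_loss(dummy_emd, dummy_ext))
```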