Flow-Based Robust Watermarking with Invertible Noise Layer for Black-Box Distortions
Authors: Han Fang, Yupeng Qiu, Kejiang Chen, Jiyi Zhang, Weiming Zhang, Ee-Chien Chang
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate the superiority of the proposed framework in terms of visual quality and robustness. Compared with the state-of-the-art architecture, the visual quality (measured by PSNR; a sketch of the metric appears after this table) of the proposed framework improves by 2 dB, and the extraction accuracy after JPEG compression (QF=50) improves by more than 4%. Moreover, robustness against black-box distortions is largely achieved, with more than 95% extraction accuracy. |
| Researcher Affiliation | Academia | National University of Singapore; University of Science and Technology of China |
| Pseudocode | No | The paper provides network architectures, equations, and descriptions of processes, but does not include a dedicated 'Pseudocode' or 'Algorithm' block. |
| Open Source Code | Yes | Source code: https://github.com/QQiuyp/FIN |
| Open Datasets | Yes | In this paper, the DIV2K (Agustsson and Timofte 2017) training dataset is used for training. The testing dataset we choose is the classical USC-SIPI (Viterbi 1977) image dataset. |
| Dataset Splits | No | The paper states that the DIV2K training dataset is used for training and the USC-SIPI image dataset is used for testing, but it does not specify any train/validation/test splits, percentages, or methodology for splitting the datasets. |
| Hardware Specification | Yes | The framework is implemented by PyTorch (Collobert, Kavukcuoglu, and Farabet 2011) and is run on one NVIDIA RTX 3090ti. |
| Software Dependencies | No | The paper mentions 'PyTorch (Collobert, Kavukcuoglu, and Farabet 2011)' but does not specify its version number or any other software dependencies with their versions. |
| Experiment Setup | Yes | The parameters λ1 and λ2 are fixed as 1 and 10, respectively. The number of invertible neural blocks in FED, n, is set to 8, and the number of invertible noise blocks, k, is set to 8. For parameter optimization of each network, Adam (Kingma and Ba 2015) is used with a learning rate of 1e-4 and otherwise default hyperparameters. |
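
The PSNR figures quoted in the Research Type row follow the standard peak signal-to-noise ratio definition. Below is a minimal sketch of that metric, assuming 8-bit images; it is illustrative only and not code from the paper's repository.

```python
import numpy as np

def psnr(cover: np.ndarray, stego: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio (dB) between cover and watermarked images."""
    mse = np.mean((cover.astype(np.float64) - stego.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images: PSNR is unbounded
    return 10.0 * np.log10(peak**2 / mse)
```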
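The Experiment Setup row can be read as the training-loop sketch below. Only the hyperparameters (λ1 = 1, λ2 = 10, n = k = 8, Adam with learning rate 1e-4) come from the paper; the tiny model, loss terms, and data are placeholders, not the paper's FIN flow-based encoder/decoder.

```python
import torch
import torch.nn as nn

LAMBDA1, LAMBDA2 = 1.0, 10.0   # loss weights λ1, λ2 reported in the paper
N_INV_BLOCKS = 8               # invertible neural blocks in FED (paper)
K_NOISE_BLOCKS = 8             # invertible noise blocks (paper)

# Placeholder stack standing in for the invertible blocks; NOT the FIN architecture.
model = nn.Sequential(*[nn.Conv2d(3, 3, 3, padding=1) for _ in range(N_INV_BLOCKS)])
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # Adam, lr = 1e-4 (paper)

cover = torch.rand(4, 3, 128, 128)    # stand-in for a DIV2K training batch
message = torch.rand(4, 3, 128, 128)  # stand-in watermark signal

stego = model(cover)
embed_loss = nn.functional.mse_loss(stego, cover)      # visual-quality term
extract_loss = nn.functional.mse_loss(stego, message)  # placeholder extraction term
loss = LAMBDA1 * embed_loss + LAMBDA2 * extract_loss   # weighted total, as reported

optimizer.zero_grad()
loss.backward()
optimizer.step()
```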