Removing Interference and Recovering Content Imaginatively for Visible Watermark Removal

Authors: Yicheng Leng, Chaowei Fang, Gen Li, Yixiang Fang, Guanbin Li

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our empirical validations, spanning two large-scale datasets, affirm the superiority of our approach against contemporary methodologies.
Researcher Affiliation | Collaboration | Yicheng Leng 1,2; Chaowei Fang 1*; Gen Li 1,3; Yixiang Fang 2; Guanbin Li 4,5. Affiliations: 1 School of Artificial Intelligence, Xidian University, Xi'an, China; 2 School of Data Science, The Chinese University of Hong Kong, Shenzhen, China; 3 Afirstsoft, Shenzhen, China; 4 School of Computer Science and Engineering, Research Institute of Sun Yat-sen University in Shenzhen, Sun Yat-sen University, Guangzhou, China; 5 Guangdong Province Key Laboratory of Information Security Technology.
Pseudocode | No | The paper describes the methodology using text and diagrams (e.g., Fig. 3, Fig. 4), but does not include explicit pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any explicit statements or links indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | HWVOC: The background images for this dataset are collected from PASCAL VOC2012 (Everingham et al. 2015).
Dataset Splits | No | The paper specifies training and testing splits (e.g., '60,000 and 2,500 watermarked images are generated for training and testing, respectively' for HWVOC), but does not explicitly mention a separate validation split or its size.
Hardware Specification | No | The paper does not specify the exact hardware components (e.g., specific GPU or CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper mentions using PyTorch (Paszke et al. 2019) and the Adam optimizer (Kingma and Ba 2014) but does not provide specific version numbers for these or any other software dependencies.
Experiment Setup | Yes | We train the model for 100 epochs, utilizing pretrained SLBR parameters. We adopt Adam optimizer (Kingma and Ba 2014) with learning rate of 0.001, batch size of 8, β1 of 0.9, and β2 of 0.999. The hyper-parameters used in the training loss are: γ = 1.5, α = 0.75, λ1 = 2, λ2 = 1, and λ3 = 3.
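The reported experiment setup maps directly onto a few PyTorch optimizer and data-loader arguments. The sketch below is a minimal illustration of that configuration only: the network, the checkpoint path, and the data are placeholders (neither code nor pretrained SLBR weights are linked from the paper), and the full composite loss is reduced to a single reconstruction term.

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset

# Stand-in network; the actual watermark-removal architecture and the pretrained
# SLBR checkpoint are not released with the paper, so both are hypothetical here.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 3, kernel_size=3, padding=1),
)
# model.load_state_dict(torch.load("slbr_pretrained.pth"))  # hypothetical checkpoint path

# Optimizer settings as reported: Adam, lr = 0.001, beta1 = 0.9, beta2 = 0.999.
optimizer = optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.999))

# Loss-weighting hyper-parameters as reported; the individual loss terms they
# weight are defined in the paper and are not reproduced here.
gamma, alpha = 1.5, 0.75
lambda1, lambda2, lambda3 = 2.0, 1.0, 3.0

# Random tensors stand in for HWVOC watermarked/clean image pairs.
loader = DataLoader(
    TensorDataset(torch.rand(16, 3, 256, 256), torch.rand(16, 3, 256, 256)),
    batch_size=8,  # batch size 8 as reported
    shuffle=True,
)

l1 = nn.L1Loss()
for epoch in range(100):  # 100 epochs as reported
    for watermarked, clean in loader:
        restored = model(watermarked)
        # Single reconstruction term only; the paper's objective combines several
        # weighted terms using the hyper-parameters listed above.
        loss = lambda1 * l1(restored, clean)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

Because the hardware and software versions are unreported, this sketch fixes only the settings the paper states explicitly; everything else would need to be confirmed against a released implementation.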