Recaptured Raw Screen Image and Video Demoiréing via Channel and Spatial Modulations

Authors: Huanjing Yue, Yijia Cheng, Xin Liu, Jingyu Yang

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments demonstrate that our method achieves state-of-the-art performance for both image and video demoiréing.
Researcher Affiliation | Academia | Huanjing Yue (Tianjin University, Tianjin, China, huanjing.yue@tju.edu.cn); Yijia Cheng (Tianjin University, Tianjin, China, yijia_cheng@tju.edu.cn); Xin Liu (Tianjin University, Tianjin, China; Lappeenranta-Lahti University of Technology LUT, Lappeenranta, Finland, linuxsino@gmail.com); Jingyu Yang (Tianjin University, Tianjin, China, yjy@tju.edu.cn)
Pseudocode | No | The paper does not contain a pseudocode block or a clearly labeled algorithm section.
Open Source Code | Yes | We have released the code and dataset in https://github.com/tju-chengyijia/VD_raw.
Open Datasets | Yes | We have released the code and dataset in https://github.com/tju-chengyijia/VD_raw. For image demoiréing, we utilize the dataset constructed by [34], which contains raw image inputs. For video demoiréing, we utilize the dataset constructed in Sec. 3, which contains 300 video clips. We randomly select 50 video clips to serve as the testing set.
Dataset Splits | Yes | For video demoiréing, we utilize the dataset constructed in Sec. 3, which contains 300 video clips. We randomly select 50 video clips to serve as the testing set.
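The reported split (300 video clips, 50 of them randomly held out for testing) can be sketched with a seeded shuffle. This is an illustrative reconstruction, not the authors' code; the clip identifiers are hypothetical placeholders.

```python
import random

def split_clips(clip_ids, n_test=50, seed=0):
    """Randomly hold out n_test clips for testing; the rest form the training set."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = list(clip_ids)
    rng.shuffle(shuffled)
    return shuffled[n_test:], shuffled[:n_test]

# 300 clips, as in the paper's video demoiréing dataset
clips = [f"clip_{i:03d}" for i in range(300)]
train_set, test_set = split_clips(clips)
print(len(train_set), len(test_set))  # 250 50
```

A fixed seed matters here: without it, rerunning the split would evaluate on different clips than the model was trained against.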
Hardware Specification | Yes | The proposed method is implemented in PyTorch and trained with two NVIDIA 3090 GPUs.
Software Dependencies | No | The proposed method is implemented in PyTorch. However, no specific version number for PyTorch or other software dependencies is provided.
Experiment Setup | Yes | For video demoiréing, the weighting parameters λ1-λ5 in Eqs. 1 and 2 are set to 0.5, 1, 5, 1, 1, respectively. The first stage training starts with a learning rate of 2e-4, which decreases to 5e-5 at the 37th epoch. The baseline reaches convergence after 40 epochs. In the second stage, the initial learning rate is 2e-4, which decreases to 1e-4 and 2.5e-5 at the 10th and 30th epochs. All the network parameters are optimized by Adam. The batch size is set to 14 and the patch size is set to 256. Image demoiréing shares similar training strategies.
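The quoted two-stage learning-rate schedule can be written as a pair of plain step functions. This is a minimal sketch of the reported numbers only, assuming each "decrease at the Nth epoch" takes effect from that epoch onward (the paper does not pin down the boundary convention).

```python
def stage1_lr(epoch):
    """First stage: 2e-4, dropping to 5e-5 at epoch 37 (converges by epoch 40)."""
    return 2e-4 if epoch < 37 else 5e-5

def stage2_lr(epoch):
    """Second stage: 2e-4, then 1e-4 at epoch 10 and 2.5e-5 at epoch 30."""
    if epoch < 10:
        return 2e-4
    if epoch < 30:
        return 1e-4
    return 2.5e-5

# Remaining reported hyperparameters (loss weights from Eqs. 1 and 2)
LAMBDAS = (0.5, 1, 5, 1, 1)   # lambda_1 .. lambda_5
BATCH_SIZE = 14
PATCH_SIZE = 256               # training crop size, optimized with Adam
```

In a PyTorch training loop these step functions would correspond to `torch.optim.lr_scheduler.MultiStepLR` milestones on an Adam optimizer, but the exact wiring is not specified in the paper.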