Direction-Aware Video Demoiréing with Temporal-Guided Bilateral Learning

Authors: Shuning Xu, Binbin Song, Xiangyu Chen, Jiantao Zhou

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments demonstrate that our video demoiréing method outperforms state-of-the-art approaches by 2.3 dB in PSNR, and also delivers a superior visual experience. ... We evaluate the effectiveness of our proposed methods using the VDmoiré dataset (Dai et al. 2022). ... Quantitative Results: The performance of demoiréing is quantitatively measured using PSNR, SSIM, and LPIPS. In Table 1, our proposed DTNet achieves leading video demoiréing performance on all four datasets. ... Ablation Study: Table 3 presents an assessment of the effectiveness of our proposed FDDA and TDR through ablation experiments involving diverse combinations of these foundational components. (A hedged PSNR sketch follows this table.)
Researcher Affiliation | Collaboration | 1. State Key Laboratory of Internet of Things for Smart City, Department of Computer and Information Science, University of Macau; 2. Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences
Pseudocode | No | No pseudocode or algorithm blocks are present.
Open Source Code | No | No explicit statement about releasing code, and no link to a code repository, is found in the paper.
Open Datasets | Yes | We evaluate the effectiveness of our proposed methods using the VDmoiré dataset (Dai et al. 2022). This dataset consists of 290 clean source videos and the corresponding moiréd videos.
Dataset Splits | No | The paper does not explicitly provide training/validation/test dataset splits; it only mentions using the VDmoiré dataset.
Hardware Specification | Yes | In total, we train our model with batch size 16 on four NVIDIA Tesla A100 GPUs.
Software Dependencies | No | The paper does not provide version numbers for software dependencies; it mentions the AdamW optimizer but no library versions.
Experiment Setup | Yes | We adopt the AdamW optimizer with β1 = 0.9 and β2 = 0.999 to train the model. The learning rate is initialized as 4 × 10⁻⁴. We apply the cyclic cosine annealing learning rate schedule (Loshchilov and Hutter 2016), which allows partial warm restarts and generally improves the convergence rate of gradient-based optimization. In total, we train our model with batch size 16 on four NVIDIA Tesla A100 GPUs. (A hedged configuration sketch follows this table.)
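
The quantitative evidence above reports a 2.3 dB PSNR gain alongside SSIM and LPIPS scores. For readers checking the arithmetic, here is a minimal PSNR sketch, assuming frames stored as arrays normalized to [0, 1]; the function name and the normalization are our assumptions, not details from the paper.

```python
import numpy as np

def psnr(reference: np.ndarray, estimate: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB between two same-shaped frames.

    Assumes pixel values lie in [0, max_val].
    """
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(max_val**2 / mse)

# Toy check: a +2.3 dB gain corresponds to roughly a 1.7x reduction in MSE,
# since 10 * log10(1 / k) = 2.3 implies k = 10**(-0.23) ≈ 0.59.
clean = np.random.rand(64, 64, 3)
noisy = np.clip(clean + 0.05 * np.random.randn(64, 64, 3), 0.0, 1.0)
print(f"PSNR: {psnr(clean, noisy):.2f} dB")
```

SSIM and LPIPS are typically taken from existing implementations (e.g., skimage.metrics.structural_similarity and the lpips package) rather than re-derived.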
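
The Experiment Setup row fixes the optimizer, betas, initial learning rate, schedule, and batch size. The PyTorch sketch below wires those reported values together; the stand-in model, the L1 loss, and the restart period T_0 are our assumptions, since the quoted text does not specify them.

```python
import torch
from torch import nn
from torch.optim import AdamW
from torch.optim.lr_scheduler import CosineAnnealingWarmRestarts

# Hypothetical stand-in for DTNet; any image-to-image module fits this sketch.
model = nn.Conv2d(3, 3, kernel_size=3, padding=1)

# Reported settings: AdamW with beta1 = 0.9, beta2 = 0.999, initial lr = 4e-4.
optimizer = AdamW(model.parameters(), lr=4e-4, betas=(0.9, 0.999))

# Cyclic cosine annealing with warm restarts (Loshchilov and Hutter 2016).
# T_0 (steps before the first restart) is an assumption; the paper omits it.
scheduler = CosineAnnealingWarmRestarts(optimizer, T_0=10_000, T_mult=1)

criterion = nn.L1Loss()  # loss choice is an assumption, not from the report

for step in range(100):  # batch size 16, matching the reported setup
    moire = torch.rand(16, 3, 64, 64)  # dummy moiré frames
    clean = torch.rand(16, 3, 64, 64)  # dummy ground-truth frames
    optimizer.zero_grad()
    loss = criterion(model(moire), clean)
    loss.backward()
    optimizer.step()
    scheduler.step()
```

The paper distributes the batch of 16 across four A100 GPUs; in practice that would be handled by a data-parallel wrapper (e.g., torch.nn.parallel.DistributedDataParallel), which this single-device sketch omits.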