Motion Deblurring via Spatial-Temporal Collaboration of Frames and Events

Authors: Wen Yang, Jinjian Wu, Jupo Ma, Leida Li, Guangming Shi

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on both synthetic and real-world datasets demonstrate that our method achieves state-of-the-art performance. Project website: https://github.com/wyangvis/STCNet.
Researcher Affiliation | Academia | Wen Yang1,2, Jinjian Wu1,2, Jupo Ma1,2, Leida Li1, Guangming Shi1,2. 1School of Artificial Intelligence, Xidian University, Xi'an 710071, China; 2Pazhou Lab, Huangpu 510555, China. wen.yang@stu.xidian.edu.cn, {jinjian.wu, majupo, ldli, gmshi}@xidian.edu.cn
Pseudocode | No | The paper illustrates its network architecture with figures but does not include any structured pseudocode or algorithm blocks.
Open Source Code | Yes | Project website: https://github.com/wyangvis/STCNet.
Open Datasets | Yes | Our STCNet is evaluated on 1) Synthetic dataset. The GoPro (Nah, Hyun Kim, and Mu Lee 2017) and DVD (Su et al. 2017) datasets are widely adopted for image-only and event-based deblurring, e.g., (Sun et al. 2022); they contain synthetic blurred images and sharp ground-truth images, as well as synthetic events generated by the simulation algorithm ESIM (Rebecq, Gehrig, and Scaramuzza 2018).
Dataset Splits | No | For the REB dataset, "There are 60 videos of REB, 40 of which are used for training and 20 for testing." The paper does not explicitly mention a validation split for any of the datasets used, nor does it specify standard splits for the GoPro or DVD datasets if they include a validation set.
Hardware Specification | Yes | Our method is implemented using PyTorch on an NVIDIA RTX 3090 GPU.
Software Dependencies | No | The paper mentions 'PyTorch' but does not specify its version number, nor does it list other software dependencies with their respective versions.
Experiment Setup | Yes | The training patch size is 256 × 256 with a mini-batch size of 8. The optimizer is ADAM (Kingma and Ba 2015); the learning rate is initialized at 2 × 10⁻⁴ and decayed by a cosine learning-rate schedule to a minimum of 10⁻⁶. For data augmentation, each patch is horizontally flipped with probability 0.5.
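
For reference, below is a minimal PyTorch sketch of the quoted training configuration. The stand-in model, epoch count, and data pipeline are assumptions (the paper specifies only the optimizer, learning-rate schedule, patch size, mini-batch size, and flip augmentation); the actual STCNet implementation is at https://github.com/wyangvis/STCNet.

```python
import torch
from torch import nn, optim
from torchvision import transforms

# Stand-in model: the real architecture is STCNet (see the project repo);
# a tiny conv stack is used here only so the snippet is self-contained.
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1),
)

num_epochs = 300  # assumed; the paper does not state the training length

# ADAM optimizer, initial learning rate 2e-4, cosine decay to a minimum of 1e-6
optimizer = optim.Adam(model.parameters(), lr=2e-4)
scheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=num_epochs, eta_min=1e-6)

# Data augmentation: 256x256 training patches, horizontal flip with probability 0.5
augment = transforms.Compose([
    transforms.RandomCrop(256),
    transforms.RandomHorizontalFlip(p=0.5),
])

# Training would iterate over mini-batches of size 8, e.g.
# loader = torch.utils.data.DataLoader(train_set, batch_size=8, shuffle=True)
for epoch in range(num_epochs):
    # ... forward/backward passes over the loader go here ...
    scheduler.step()
```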