Event-driven Video Deblurring via Spatio-Temporal Relation-Aware Network
Authors: Chengzhi Cao, Xueyang Fu, Yurui Zhu, Gege Shi, Zheng-Jun Zha
IJCAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments show that our STRA significantly outperforms several competing methods, e.g., on the HQF dataset, our network achieves a gain of up to 1.3 dB in PSNR over the most advanced method. (A minimal PSNR sketch follows the table.) |
| Researcher Affiliation | Academia | University of Science and Technology of China, China; chengzhicao@mail.ustc.edu.cn, xyfu@ustc.edu.cn, {zyr, sgg19990910}@mail.ustc.edu.cn, zhazj@ustc.edu.cn |
| Pseudocode | No | The paper provides network diagrams and mathematical equations but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code is available at https://github.com/Chengzhi-Cao/STRA. |
| Open Datasets | Yes | Our Spatio-Temporal Relation-Aware network (STRA) is trained on the benchmark GoPro dataset [Nah et al., 2017], composed of synthetic events and 2,103 pairs of blurry frames and sharp ground-truth frames. ... For evaluation on real-world events, we use the HQF dataset [Stoffregen et al., 2020], which includes both real-world events and ground-truth frames captured with a DAVIS240C camera [Brandli et al., 2014]. |
| Dataset Splits | No | The paper mentions training on the GoPro dataset (2,103 pairs) and testing on the GoPro test set (1,111 pairs) and the HQF dataset, but does not explicitly describe a validation dataset or its split. |
| Hardware Specification | Yes | Our network is implemented using PyTorch on a single NVIDIA RTX 2080Ti GPU. |
| Software Dependencies | No | The paper states 'implemented using PyTorch' but does not specify a version number for PyTorch or any other software libraries used. |
| Experiment Setup | Yes | In the training process, we randomly cropped the sampled frames to a size of 256 × 256. For data augmentation, each patch was horizontally flipped with probability 0.5. We use a batch size of 8 training pairs and the Adam optimizer [Kingma and Ba, 2017] with parameters β1 = 0.9, β2 = 0.999. The maximum number of training epochs is set to 200, with an initial learning rate of 10⁻⁴ that decays by 25% every 50 epochs. (A training-configuration sketch follows the table.) |
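The 1.3 dB figure quoted in the Research Type row refers to the standard peak signal-to-noise ratio metric. The snippet below is a minimal sketch of that metric in PyTorch (the paper's framework), not code from the authors' repository; the function name `psnr` and the `max_val` parameter are illustrative choices for images scaled to [0, 1].

```python
import torch

def psnr(pred: torch.Tensor, target: torch.Tensor, max_val: float = 1.0) -> torch.Tensor:
    """Peak signal-to-noise ratio in dB for images scaled to [0, max_val]."""
    mse = torch.mean((pred - target) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)
```

Under this definition, a 1.3 dB PSNR gain such as the one reported on HQF corresponds to roughly a 26% reduction in mean squared error (10^(-0.13) ≈ 0.74).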
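Based on the Experiment Setup row above, here is a minimal, hypothetical PyTorch sketch of the described training configuration. The `nn.Conv2d` placeholder stands in for the actual STRA network (available at the GitHub link above), the data-loading loop is elided, and the StepLR gamma of 0.75 reads "decays by 25%" as multiplying the learning rate by 0.75.

```python
import torch
from torch import nn, optim
from torchvision import transforms

# Placeholder module standing in for the STRA network (see the authors' repo).
model = nn.Conv2d(3, 3, kernel_size=3, padding=1)

# Augmentation as described: 256 x 256 random crops, horizontal flip with p=0.5.
# In practice the same crop/flip must be applied jointly to the blurry frame,
# the event representation, and the sharp ground-truth frame.
augment = transforms.Compose([
    transforms.RandomCrop(256),
    transforms.RandomHorizontalFlip(p=0.5),
])

# Adam with beta1=0.9, beta2=0.999 and an initial learning rate of 1e-4.
optimizer = optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999))

# "Decays by 25% every 50 epochs" is read here as lr *= 0.75 every 50 epochs;
# use gamma=0.25 instead if the paper means retaining 25% of the rate.
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.75)

for epoch in range(200):  # maximum of 200 training epochs
    # ... iterate over batches of 8 blurry/sharp training pairs here ...
    scheduler.step()
```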