A Coarse-to-Fine Fusion Network for Event-Based Image Deblurring
Authors: Huan Li, Hailong Shi, Xingyu Gao
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on the GoPro and REBlur datasets demonstrate that our method achieves state-of-the-art performance for the image deblurring task. |
| Researcher Affiliation | Academia | Huan Li¹,², Hailong Shi¹, and Xingyu Gao¹. ¹Institute of Microelectronics, Chinese Academy of Sciences, Beijing, China; ²University of Chinese Academy of Sciences, Beijing, China. {lihuan, shihailong, gaoxingyu}@ime.ac.cn |
| Pseudocode | No | No structured pseudocode or algorithm blocks (clearly labeled algorithm sections or code-like formatted procedures) were found. |
| Open Source Code | No | The paper does not provide an explicit statement about releasing source code or a link to a code repository for the methodology described. |
| Open Datasets | Yes | We assess our approach on two image deblurring datasets: the GoPro dataset [Nah et al., 2017] and the recently introduced REBlur dataset [Sun et al., 2022]. ... As the dataset solely contains images, we employ the event simulator ESIM [Rebecq et al., 2018] to generate the corresponding events. |
| Dataset Splits | No | The paper specifies training and testing splits for both datasets (GoPro: 2103 for training, 1111 for testing; REBlur: 486 for training, 983 for testing), but does not explicitly mention a validation split. |
| Hardware Specification | Yes | Training is conducted on 4 NVIDIA RTX 3090 GPUs for 300k iterations. The fine-tuning process on the REBlur dataset involves 2000 iterations on one A100... |
| Software Dependencies | No | The paper mentions using the Adam optimizer and a cosine annealing strategy, but does not provide specific version numbers for software dependencies such as deep learning frameworks (e.g., PyTorch, TensorFlow) or Python. |
| Experiment Setup | Yes | During training, we input cropped images sized at 256 × 256 and event voxels into the network, incorporating horizontal and vertical flipping for both images and event voxels to augment the dataset. Utilizing the Adam optimizer, we implement a cosine annealing strategy, adjusting the learning rate from 4 × 10⁻⁴ to 1 × 10⁻⁷. Training is conducted on 4 NVIDIA RTX 3090 GPUs for 300k iterations. The fine-tuning process on the REBlur dataset involves 2000 iterations on one A100, initializing the learning rate at 4 × 10⁻⁵. All other parameters and configurations remain consistent with the training on the GoPro dataset. *(See the configuration sketch below the table.)* |
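
The Experiment Setup row pins down the optimization schedule (Adam, cosine annealing from 4 × 10⁻⁴ to 1 × 10⁻⁷ over 300k iterations, 256 × 256 crops with flip augmentation) but not the framework or any code. The sketch below illustrates that configuration under the assumption of a PyTorch implementation; the placeholder network, loss, and synthetic batches are hypothetical stand-ins, not the authors' (unreleased) model.

```python
# Minimal sketch of the reported training configuration, assuming PyTorch.
# The network, loss, and data pipeline are placeholders: the paper's
# coarse-to-fine fusion model and its event-voxel binning are not public.
import torch
from torch import nn
from torch.optim import Adam
from torch.optim.lr_scheduler import CosineAnnealingLR

TOTAL_ITERS = 300_000          # 300k iterations on GoPro (per the paper)
INIT_LR, MIN_LR = 4e-4, 1e-7   # cosine annealing range reported in the paper
EVENT_BINS = 6                 # assumed number of temporal voxel bins

# Placeholder network: maps a blurry RGB crop concatenated with an event
# voxel grid to a restored RGB image.
model = nn.Conv2d(3 + EVENT_BINS, 3, kernel_size=3, padding=1)
criterion = nn.L1Loss()        # assumed reconstruction loss; not stated here

optimizer = Adam(model.parameters(), lr=INIT_LR)
scheduler = CosineAnnealingLR(optimizer, T_max=TOTAL_ITERS, eta_min=MIN_LR)

for it in range(TOTAL_ITERS):
    # Stand-in batch of 256 x 256 crops. A real run would load GoPro
    # blurry/sharp pairs plus ESIM-simulated events, applying the same
    # random horizontal/vertical flips to images and event voxels.
    blurry = torch.rand(4, 3, 256, 256)
    events = torch.rand(4, EVENT_BINS, 256, 256)
    sharp = torch.rand(4, 3, 256, 256)

    optimizer.zero_grad()
    restored = model(torch.cat([blurry, events], dim=1))
    loss = criterion(restored, sharp)
    loss.backward()
    optimizer.step()
    scheduler.step()           # learning rate decays from 4e-4 toward 1e-7
```

For the REBlur fine-tuning stage described in the same row, the paper only changes the iteration count (2000) and the initial learning rate (4 × 10⁻⁵); under the same assumptions, that amounts to re-creating the optimizer and scheduler with those two values while keeping everything else unchanged.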