DeblurSR: Event-Based Motion Deblurring under the Spiking Representation

Authors: Chen Song, Chandrajit Bajaj, Qixing Huang

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct experimental evaluations on two benchmark datasets.
Researcher Affiliation | Academia | The University of Texas at Austin, Austin, TX 78712, USA
Pseudocode | No | The paper describes the prediction algorithm and overall pipeline through text and diagrams (Figure 3), but it does not include a dedicated pseudocode or algorithm block.
Open Source Code | Yes | We refer interested readers to our open-source GitHub repository for implementation details.
Open Datasets | Yes | The REalistic and Dynamic Scenes (REDS) (Nah et al. 2019) dataset is a popular dataset... The original REDS dataset contains sharp videos with various real-world contents released under the CC BY 4.0 license. The High Quality Frames (HQF) (Stoffregen et al. 2020) dataset is another benchmark recently developed... The dataset is available for public download...
Dataset Splits | Yes | We then employ the official training and validation splits to train and test our model, respectively.
Hardware Specification | Yes | On REDS, it takes 100 hours to train E-CIR for 50 epochs using three Tesla V100 GPUs. By contrast, DeblurSR only requires 72 hours and two of the same GPUs under the identical training setting.
Software Dependencies | No | We implement DeblurSR under PyTorch (Paszke et al. 2019) and utilize ADAM (Kingma and Ba 2014) to train the network for 50 epochs. No specific version numbers for PyTorch or other libraries are provided.
Experiment Setup | Yes | We set the initial learning rate to 0.0001 and reduce the learning rate by half after 20 and 40 epochs, respectively. The number of line segments in the spiking representation is n = 10. The dimension of spatial kernels is k = 3. The number of histogram bins is m = 26. The size of image and coordinate embeddings is d = 256.
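
For readers who want to mirror the quoted training setup, the sketch below shows the optimization schedule in PyTorch. It is a minimal illustration, not the authors' code: the `Linear` module is a placeholder for the actual DeblurSR network (defined in the authors' GitHub repository), and the loop body elides data loading and the loss. The ADAM optimizer, the 1e-4 initial learning rate, the halving after epochs 20 and 40, the 50-epoch budget, and the hyperparameters n, k, m, d all come from the rows above.

```python
import torch

# Hyperparameters quoted from the paper (names follow the paper's notation).
config = {
    "n": 10,   # line segments in the spiking representation
    "k": 3,    # spatial kernel dimension
    "m": 26,   # histogram bins
    "d": 256,  # image / coordinate embedding size
}

# Placeholder module standing in for the DeblurSR network (hypothetical).
model = torch.nn.Linear(config["d"], config["d"])

# ADAM with the stated initial learning rate of 0.0001.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Halve the learning rate after epochs 20 and 40, as quoted.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[20, 40], gamma=0.5
)

for epoch in range(50):       # 50 training epochs
    # ... per-batch forward/backward passes over REDS or HQF go here ...
    optimizer.step()           # placeholder for the per-batch update
    scheduler.step()           # applies the step decay at the milestones
```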