Learning Scale-Aware Spatio-temporal Implicit Representation for Event-based Motion Deblurring
Authors: Wei Yu, Jianing Li, Shengping Zhang, Xiangyang Ji
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments demonstrate that our SASNet outperforms state-of-the-art methods on both synthetic GoPro and real H2D datasets, especially in high-speed motion scenarios. |
| Researcher Affiliation | Academia | (1) School of Computer Science and Technology, Harbin Institute of Technology, Weihai, China; (2) School of Computer Science, Peking University, Beijing, China; (3) Department of Automation, Tsinghua University, Beijing, China. |
| Pseudocode | No | The paper provides figures of network architectures (Figure 2, 3, 4) and describes the method in text, but no explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code and dataset are available at https://github.com/aipixel/SASNet. |
| Open Datasets | Yes | Code and dataset are available at https://github.com/aipixel/SASNet. |
| Dataset Splits | Yes | GoPro Dataset. It consists of 3214 sharp images with resolutions of 1280×720, in which 2103 are used for training and 1111 for testing. |
| Hardware Specification | Yes | The proposed SASNet is implemented in PyTorch and trained on an NVIDIA GeForce RTX 3090 for 100 epochs with a batch size of 8. |
| Software Dependencies | No | The proposed SASNet is implemented in PyTorch... In PyTorch (Paszke et al., 2019)... |
| Experiment Setup | Yes | The training patch size is set to 256×256 and augmented by horizontal and vertical flipping to enhance robustness. We use the Adam optimizer (Kingma & Ba, 2014) with an initial learning rate of 1e-4 that decays by 0.5 every 30 epochs, and employ only the L1 loss as the training loss. |
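The reported setup (Adam, initial learning rate 1e-4 halved every 30 epochs, L1 loss, 100 epochs, batch size 8, 256×256 patches) can be sketched in PyTorch as below. This is a minimal illustration, not the authors' code: the one-layer model, dummy tensors, and reduced patch size are placeholders for the real SASNet and GoPro data.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in model; the actual SASNet architecture is far more complex.
model = nn.Conv2d(3, 3, kernel_size=3, padding=1)

# As reported: Adam optimizer, initial learning rate 1e-4,
# decayed by 0.5 every 30 epochs, L1 training loss.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.5)
criterion = nn.L1Loss()

num_epochs = 100  # paper trains for 100 epochs with batch size 8
for epoch in range(num_epochs):
    # Dummy 3-channel patches stand in for 256x256 blurry/sharp pairs
    # (shrunk to 32x32 here so the sketch runs quickly).
    blurry = torch.rand(8, 3, 32, 32)
    sharp = torch.rand(8, 3, 32, 32)

    optimizer.zero_grad()
    loss = criterion(model(blurry), sharp)
    loss.backward()
    optimizer.step()
    scheduler.step()  # step decay: lr halves at epochs 30, 60, 90

# After 100 epochs the learning rate has been halved three times:
# 1e-4 -> 1.25e-5.
print(optimizer.param_groups[0]["lr"])
```

With this schedule the final learning rate is 1e-4 × 0.5³ = 1.25e-5, matching a step decay of 0.5 every 30 epochs over 100 epochs.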