Learning to Super-resolve Dynamic Scenes for Neuromorphic Spike Camera

Authors: Jing Zhao, Ruiqin Xiong, Jian Zhang, Rui Zhao, Hangfan Liu, Tiejun Huang

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimental results show that Spike SR-Net is promising in super-resolving higher-quality images for spike camera." "Experiments on both real-world and synthesized spike data demonstrate the promising performance of the Spike SR-Net." Comparative Results: "To evaluate our Spike SR-Net, we compare it with the existing spike SR method"
Researcher Affiliation | Academia | (1) Institute of Digital Media, School of Computer Science, Peking University; (2) National Engineering Research Center of Visual Technology (NERCVT), Peking University; (3) School of Electronic and Computer Engineering, Peking University Shenzhen Graduate School; (4) Center for Biomedical Image Computing and Analytics, University of Pennsylvania; (5) Beijing Academy of Artificial Intelligence. Emails: {jzhaopku, rqxiong, zhangjian.sz, tjhuang}@pku.edu.cn, ruizhao@stu.pku.edu.cn, hfliu@upenn.edu
Pseudocode | No | The paper describes the components and operations of its proposed network in detail, including mathematical formulations, but it does not include a clearly labeled pseudocode block or algorithm.
Open Source Code | No | The paper does not provide any statement about making its source code publicly available, nor does it include a link to a code repository.
Open Datasets | Yes | "We use the images from DIV2K (Agustsson and Timofte 2017) and the videos from REDS (Nah et al. 2020) and 4K1000FPS (Sim, Oh, and Kim 2021) as the virtual scenes. The training dataset consists of 600 spike streams, which are generated based on all the three datasets to enhance diversity."
Dataset Splits | No | The paper mentions a 'training dataset' and 'testing datasets' but does not explicitly specify a validation split or its size/percentage, nor does it refer to predefined splits that include a validation set.
Hardware Specification | Yes | "We use Adam optimizer and implement our experiments using PyTorch with two GTX 1080Ti GPUs."
Software Dependencies | No | The paper mentions using PyTorch for implementation but does not specify a version number for it or for any other software dependency, which is necessary for reproducibility.
Experiment Setup | Yes | "In our implementation, four residual blocks are used in the kernel predictor. The stage number of the super-resolver is set to 4." The loss function is defined as L = Σ_{t=1}^{T} λ_t Σ_k ‖X_k^{(t)} − I_k‖, where X_k^{(t)} is the super-resolved frame produced at stage t and I_k is the HR ground-truth at time k. When t < T, λ_t is set to 0.1; otherwise, λ_t is set to 1. "We crop the spike frames into 40×40 patches and set the batch size to 6. During training, data augmentation is performed by randomly rotating 90°, 180°, 270° and horizontally flipping. The model is trained for 30 epochs."
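The staged loss described in the Experiment Setup row can be sketched as follows. This is a minimal NumPy illustration, not the authors' code: the L1 error, the function name `staged_loss`, and the variable names are assumptions; only the stage weighting (λ_t = 0.1 for t < T, 1 otherwise) comes from the paper.

```python
import numpy as np

def staged_loss(stage_outputs, ground_truth, T):
    """Weighted sum of per-stage reconstruction errors.

    stage_outputs: list of T arrays, the super-resolved frames X^(t)
    ground_truth:  array of HR ground-truth frames I_k
    Intermediate stages (t < T) are down-weighted by 0.1 and the
    final stage gets full weight 1.0, as stated in the paper.
    """
    loss = 0.0
    for t, x in enumerate(stage_outputs, start=1):
        lam = 0.1 if t < T else 1.0
        loss += lam * np.abs(x - ground_truth).mean()  # mean L1 error (assumed norm)
    return loss

# toy usage: 4 stages, each predicting a 2-frame HR sequence
gt = np.zeros((2, 8, 8))
outs = [np.full((2, 8, 8), 0.5) for _ in range(4)]
print(staged_loss(outs, gt, T=4))  # ≈ 0.65 = 0.1*0.5*3 + 1.0*0.5
```

Down-weighting intermediate stages this way supervises every stage of the super-resolver while letting the final output dominate the gradient.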