Neuromorphic Event Signal-Driven Network for Video De-raining

Authors: Chengjie Ge, Xueyang Fu, Peng He, Kunyu Wang, Chengzhi Cao, Zheng-Jun Zha

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments demonstrate the superiority of additional dynamic priors provided by the event streams both quantitatively and visually. For example, our network achieves a 1.24 dB improvement on SynHeavy25 (Yang et al. 2019), a 1.18 dB improvement on SynLight25 (Yang et al. 2019), and a 0.52 dB improvement on NTURain (Chen et al. 2018) over the previous state-of-the-art method (Yang et al. 2021) with only 39% of the parameters.
Researcher Affiliation | Academia | University of Science and Technology of China, China. cjge@mail.ustc.edu.cn, xyfu@ustc.edu.cn, {hp0618, kunyuwang, chengzhicao}@mail.ustc.edu.cn, zhazj@ustc.edu.cn
Pseudocode | Yes | Specifically, we select a ResBlock as the approximation operator for solving the dictionary problem in Eq. (10), and a U-Net structure to estimate sparse coefficients as in previous methods (Zhu et al. 2022; Wang et al. 2023). To extract motion priors from the event stream, we utilize a novel Spiking Mutual Enhancement (SME) module to extract features in the temporal and spatial domains, as denoted in Figure 2(b), and fuse them to generate the initial $H_O$. Subsequently, we can separately solve the distribution for each module in the network and implement the following procedures for the entire network (Eq. 27):

$\hat{D}_R^{(t+1)} = \mathcal{F}^{-1}\{(\sigma_R C_R^H C_R + \mu_{D_R} I)^{-1}(\sigma_R C_R^H R + \mu_{D_R} D_R^{(t)})\}$, $\quad D_R^{(t+1)} = \mathrm{EstNet}(\hat{D}_R^{(t+1)}, \lambda_{D_R})$;
$H_O = \mathrm{SME}(\mathrm{Event})$, $\quad H_R = \min_{H_R} \|H_R R - R\|_F^2$, $\quad H_M = H_O - H_R$;
$\hat{D}_B^{(t+1)} = \mathcal{F}^{-1}\{(\sigma_B C_B^H C_B + \mu_{D_B} I)^{-1}(\sigma_B C_B^H B + \mu_{D_B} D_B^{(t)})\}$, $\quad D_B^{(t+1)} = \mathrm{EstNet}(\hat{D}_B^{(t+1)}, \lambda_{D_B})$;
$Z^{(t+1)} = (H_M^H H_M + \gamma I)^{-1}(\gamma M^{(t)} + \gamma \Theta^{(t)} + H_M^H H_M O)$, $\quad M^{(t+1)} = \mathrm{EstNet}(Z^{(t+1)}, \delta_M)$, $\quad \Theta^{(t+1)} = \Theta^{(t)} + \rho(M^{(t+1)} - Z^{(t+1)})$;
De-rained result $= H_M M + (I - H_M) B$.
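As a rough, non-authoritative sketch of what one unrolling stage computes, the snippet below implements the FFT-domain closed-form dictionary update and one ADMM iteration in PyTorch. Treating $H_M$ as an elementwise mask, using a soft-threshold stand-in for the learned EstNet, and all function names (fft_dictionary_update, admm_motion_step) are illustrative assumptions; this is not the authors' implementation, which was not released.

```python
import torch

def fft_dictionary_update(C_freq, x, D_prev, sigma, mu):
    """Closed-form frequency-domain step, e.g. for the rain dictionary:
    D_hat^{(t+1)} = F^{-1}{(sigma C^H C + mu I)^{-1}(sigma C^H R + mu D^{(t)})}.
    C_freq is the (precomputed) 2-D FFT of the convolution kernel."""
    X = torch.fft.fft2(x)
    D = torch.fft.fft2(D_prev)
    numer = sigma * torch.conj(C_freq) * X + mu * D
    denom = sigma * torch.conj(C_freq) * C_freq + mu
    return torch.fft.ifft2(numer / denom).real

def est_net(z, lam):
    # Stand-in for the learned EstNet (a U-Net in the paper): a simple
    # soft-threshold, i.e. the classical proximal operator it replaces.
    return torch.sign(z) * torch.clamp(z.abs() - lam, min=0.0)

def admm_motion_step(H_M, O, M, Theta, gamma, rho, delta_M):
    """One Z/M/Theta iteration. With H_M taken as elementwise, the inverse
    (H_M^H H_M + gamma I)^{-1} reduces to a pointwise division."""
    Z = (gamma * (M + Theta) + H_M * H_M * O) / (H_M * H_M + gamma)
    M_new = est_net(Z, delta_M)            # learned proximal step
    Theta_new = Theta + rho * (M_new - Z)  # dual (multiplier) update
    return Z, M_new, Theta_new

def compose_derained(H_M, M, B):
    # Final composition: de-rained result = H_M * M + (1 - H_M) * B.
    return H_M * M + (1.0 - H_M) * B
```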
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described, nor does it state that code is released in supplementary materials or via a specific link.
Open Datasets | Yes | In this section, we compare our method with previous methods on the four most commonly used benchmark datasets: SynHeavy25 and SynLight25... NTURain (Chen et al. 2018)...
Dataset Splits | No | The paper mentions using benchmark datasets but does not provide specific training/validation/test splits (e.g., percentages, sample counts, or explicit reference to standard validation splits).
Hardware Specification | Yes | All the experiments are implemented on an NVIDIA RTX 3090 based on PyTorch.
Software Dependencies | No | The paper mentions 'PyTorch' and the 'ADAM optimizer' but does not provide version numbers for these or any other libraries, which would be needed for a reproducible description of the environment.
Experiment Setup | Yes | The images are cropped into 128×128 patches with random horizontal flipping. The total number of training epochs is set to 1000. The initial learning rate is set to 10⁻⁴ and is divided by 2 every 200 epochs.
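For concreteness, a minimal PyTorch sketch of this schedule, assuming the ADAM optimizer the paper mentions; the model is a placeholder and the data pipeline is omitted:

```python
import torch
from torch import nn, optim

model = nn.Conv2d(3, 3, kernel_size=3, padding=1)  # placeholder network

# Hyperparameters from the paper: initial LR 1e-4, halved every 200
# epochs, 1000 epochs total.
optimizer = optim.Adam(model.parameters(), lr=1e-4)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=200, gamma=0.5)

for epoch in range(1000):
    # The training pass over randomly cropped 128x128, randomly h-flipped
    # patches would go here; crops and flips must be applied identically
    # to the rainy frames, the event representation, and the ground truth.
    scheduler.step()
```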