Learning a Spiking Neural Network for Efficient Image Deraining
Authors: Tianyu Song, Guiyue Jin, Pengpeng Li, Kui Jiang, Xiang Chen, Jiyu Jin
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we conduct comprehensive experiments on commonly used benchmark datasets to evaluate the effectiveness of the proposed method. |
| Researcher Affiliation | Collaboration | 1Dalian Polytechnic University 2Nanjing University of Science and Technology 3Harbin Institute of Technology |
| Pseudocode | No | The information is insufficient. The paper provides architectural diagrams but no structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The source code is available at https://github.com/MingTian99/ESDNet. |
| Open Datasets | Yes | We retrained all models on the four publicly available datasets (Rain12 [Li et al., 2016], Rain200L [Yang et al., 2017], Rain200H [Yang et al., 2017], Rain1200 [Zhang and Patel, 2018]) to ensure a fair comparison of all methods. |
| Dataset Splits | No | The information is insufficient. While the paper specifies training and test data sizes for datasets like Rain200L/H ('1800 synthetic rain images for training, along with 200 images designated for testing'), it does not explicitly mention or provide details for a separate validation split used in its experiments. |
| Hardware Specification | Yes | All experiments are executed on an NVIDIA GeForce RTX 3080Ti GPU (12 GB). |
| Software Dependencies | No | The information is insufficient. The paper mentions the 'PyTorch framework' but does not specify its version or the versions of any other software dependencies required to reproduce the experiments. |
| Experiment Setup | Yes | During the training process, we conduct the proposed network in the PyTorch framework with an Adam optimizer and a batch size of 12. We set the learning rate to 1×10⁻³ and apply the cosine annealing strategy [Song et al., 2023a] to steadily decrease the final learning rate to 1×10⁻⁷. For Rain200L, Rain200H, and Rain1200 datasets, we train the model by 1000 epochs. We set the stacking numbers of SRB to [4,4,8] in the encoder stage and [2,2] in the decoder stage. For the α of the gradient proxy function, it is set to 4 according to [Su et al., 2023]. |
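The reported optimizer and schedule map onto standard PyTorch components. Below is a minimal sketch of that configuration, assuming a placeholder model and a per-epoch scheduler step; the actual ESDNet architecture (including the SRB stacking numbers) and the surrogate-gradient setting α = 4 live in the authors' repository and are not reproduced here.

```python
import torch
from torch.optim import Adam
from torch.optim.lr_scheduler import CosineAnnealingLR

# Placeholder network; the real ESDNet model is defined in the authors'
# repository (https://github.com/MingTian99/ESDNet).
model = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)

epochs = 1000      # Rain200L / Rain200H / Rain1200, per the paper
batch_size = 12    # reported batch size

# Adam with an initial learning rate of 1e-3, annealed to 1e-7 by a
# cosine schedule over the full training run.
optimizer = Adam(model.parameters(), lr=1e-3)
scheduler = CosineAnnealingLR(optimizer, T_max=epochs, eta_min=1e-7)

for epoch in range(epochs):
    # ... forward pass, deraining loss, and backward pass over the
    # training loader would go here (omitted in this sketch) ...
    optimizer.step()   # update weights after accumulating gradients
    scheduler.step()   # advance the cosine schedule once per epoch
```

Whether the authors step the scheduler per epoch or per iteration is not stated in the quoted setup; the per-epoch choice above is an assumption for illustration.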