DANet: Image Deraining via Dynamic Association Learning
Authors: Kui Jiang, Zhongyuan Wang, Zheng Wang, Peng Yi, Junjun Jiang, Jinsheng Xiao, Chia-Wen Lin
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To validate our proposed DANet, we conduct extensive experiments on synthetic and real-world rainy datasets, and compare DANet with eleven image deraining methods. |
| Researcher Affiliation | Academia | 1 NERCMS, School of Computer Science, Wuhan University; 2 School of Computer Science and Technology, Harbin Institute of Technology; 3 School of Electronic Information, Wuhan University; 4 National Tsing Hua University |
| Pseudocode | No | The paper describes its proposed network architecture and components through text and diagrams, but it does not include pseudocode or explicitly labeled algorithm blocks. |
| Open Source Code | No | The paper mentions using publicly released codes for comparison methods but does not provide any statement or link regarding the open-sourcing of its own code. |
| Open Datasets | Yes | we use 13,700 clean/rain image pairs from [Zhang et al., 2020; Fu et al., 2017] for training all comparison methods with their publicly released codes by tuning the optimal settings for a fair comparison. |
| Dataset Splits | No | The paper specifies training and testing datasets, but it does not explicitly describe a separate validation dataset split or a cross-validation strategy. |
| Hardware Specification | Yes | We use Adam optimizer with the learning rate (4 x 10^-4 with the decay rate of 0.8 at every 80 epochs till 500 epochs) and batch size (16) to train our DANet on a single NVIDIA Titan Xp GPU. |
| Software Dependencies | No | The paper mentions the use of 'Adam optimizer' but does not specify software dependencies like programming languages or libraries with version numbers. |
| Experiment Setup | Yes | In our baseline, the number of RCAB is empirically set to 2 for each stage in the encoder-decoder branch and 5 for the original resolution branch with filter numbers of 48. The training images are coarsely cropped into small patches with a fixed size of 128x128 pixels to obtain the training samples. We use Adam optimizer with the learning rate (4 x 10^-4 with the decay rate of 0.8 at every 80 epochs till 500 epochs) and batch size (16) to train our DANet on a single NVIDIA Titan Xp GPU. [A hedged sketch of this training configuration follows the table.] |
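
Since the paper releases no code, the snippet below is only a minimal sketch of the training schedule quoted in the Experiment Setup row (Adam, learning rate 4 x 10^-4 decayed by 0.8 every 80 epochs for 500 epochs, batch size 16, 128x128 patches), assuming a PyTorch implementation. `DANetStub`, the dummy dataset, and the L1 loss are hypothetical stand-ins, not the authors' model, data pipeline, or objective.

```python
# Training-schedule sketch based on the settings quoted above; not the authors' code.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class DANetStub(nn.Module):
    """Hypothetical placeholder for DANet. The paper's baseline uses 2 RCABs per
    encoder-decoder stage, 5 in the original-resolution branch, and 48 filters;
    here a tiny residual CNN stands in so the script runs end to end."""
    def __init__(self, num_filters=48):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, num_filters, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(num_filters, 3, 3, padding=1),
        )

    def forward(self, x):
        # Predict a rain residual and subtract it from the rainy input.
        return x - self.body(x)

# Dummy 128x128 rain/clean patch pairs standing in for the 13,700-pair training set.
rainy = torch.rand(64, 3, 128, 128)
clean = torch.rand(64, 3, 128, 128)
loader = DataLoader(TensorDataset(rainy, clean), batch_size=16, shuffle=True)

model = DANetStub()
optimizer = torch.optim.Adam(model.parameters(), lr=4e-4)
# Learning-rate decay of 0.8 every 80 epochs, as stated in the Experiment Setup row.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=80, gamma=0.8)
criterion = nn.L1Loss()  # assumption: the paper's exact loss is not quoted in this table

for epoch in range(500):
    for rainy_patch, clean_patch in loader:
        optimizer.zero_grad()
        loss = criterion(model(rainy_patch), clean_patch)
        loss.backward()
        optimizer.step()
    scheduler.step()
```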