Rain Streak Removal via Dual Graph Convolutional Network
Authors: Xueyang Fu, Qi Qi, Zheng-Jun Zha, Yurui Zhu, Xinghao Ding
AAAI 2021, pp. 1352-1360
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on synthetic and real data demonstrate that our method achieves significant improvements over the recent state-of-the-art methods. |
| Researcher Affiliation | Academia | Xueyang Fu¹, Qi Qi², Zheng-Jun Zha¹, Yurui Zhu¹, Xinghao Ding²; ¹University of Science and Technology of China, China; ²Xiamen University, China |
| Pseudocode | No | The paper describes the network architecture and mathematical operations but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statements about releasing open-source code or links to a code repository. |
| Open Datasets | Yes | We use five representative synthetic data sets provided by GMM (Li et al. 2016), JORDER-E (Yang et al. 2019), DDN (Fu et al. 2017b) and DID-MDN (Zhang and Patel 2018b), respectively. ... we conduct experiments on the recent public real-world rainy data set SPA-Data (Wang et al. 2019), which contains nearly 0.64 million rainy/clean image pairs for training and 1000 pairs for testing. |
| Dataset Splits | No | The paper mentions training and testing data but does not explicitly define or refer to a separate 'validation' dataset or split for model tuning during training. |
| Hardware Specification | No | The paper reports CPU and GPU runtimes for comparison (Table 2), but does not specify the hardware used, such as GPU model, CPU model, or memory. |
| Software Dependencies | No | The paper mentions using 'TensorFlow (Abadi et al. 2016) and Adam (Kingma and Ba 2014)' but does not specify version numbers or any other software dependencies with version details. |
| Experiment Setup | Yes | We set the sizes of the kernels in fusion operations and GCN modules to 1×1 and the rest to 3×3. The number of feature maps is 72 for all convolutions. The non-linear activation is ReLU (Krizhevsky, Sutskever, and Hinton 2012) and used in the dilated convolutional module. We use TensorFlow (Abadi et al. 2016) and Adam (Kingma and Ba 2014) with a minibatch size of 10 to train our network. The training images are cropped into 100×100 patch pairs with horizontal flipping for data augmentation. We fix the learning rate to 0.0001 and terminate training after 300 epochs. |
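
The experiment-setup row above pins down enough hyper-parameters to reconstruct a training skeleton. Below is a minimal sketch using standard TensorFlow 2.x / Keras APIs: `build_model` is a hypothetical placeholder (the paper's DualGCN architecture and training loss are not released as code), while the optimizer, learning rate, mini-batch size, crop size, augmentation, and epoch count follow the quoted setup.

```python
import tensorflow as tf

def build_model(num_features=72):
    # Hypothetical stand-in for the paper's DualGCN derainer; the real
    # network (dilated convolutional module + two GCN modules) is not public.
    inputs = tf.keras.Input(shape=(100, 100, 3))
    x = inputs
    # 3x3 convolutions with 72 feature maps and ReLU, per the quoted setup;
    # the dilation rates here are illustrative, not from the paper.
    for rate in (1, 2, 4):
        x = tf.keras.layers.Conv2D(num_features, 3, padding="same",
                                   dilation_rate=rate, activation="relu")(x)
    # 1x1 convolution, as the paper uses in fusion operations and GCN modules.
    x = tf.keras.layers.Conv2D(num_features, 1, padding="same")(x)
    outputs = tf.keras.layers.Conv2D(3, 3, padding="same")(x)
    return tf.keras.Model(inputs, outputs)

def augment(rainy, clean):
    # Random 100x100 crop applied identically to the rainy/clean pair,
    # followed by a synchronized horizontal flip (the quoted augmentation).
    stacked = tf.concat([rainy, clean], axis=-1)
    stacked = tf.image.random_crop(stacked, size=[100, 100, 6])
    stacked = tf.image.random_flip_left_right(stacked)
    return stacked[..., :3], stacked[..., 3:]

model = build_model()
# Adam with a fixed learning rate of 1e-4, per the quoted setup;
# the MSE loss is an assumption, as the paper's loss is not quoted here.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="mse")

# dataset = ...  # tf.data.Dataset of rainy/clean pairs, e.g. from SPA-Data
# dataset = dataset.map(augment).batch(10)   # mini-batch size of 10
# model.fit(dataset, epochs=300)             # terminate after 300 epochs
```

This sketch is intended only to make the reported setup concrete; reproducing the paper's results would still require the unreleased DualGCN architecture and its training objective.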