Direction-aware Feature-level Frequency Decomposition for Single Image Deraining

Authors: Sen Deng, Yidan Feng, Mingqiang Wei, Haoran Xie, Yiping Chen, Jonathan Li, Xiao-Ping Zhang, Jing Qin

Venue: IJCAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We extensively evaluate the proposed approach on three representative datasets and experimental results corroborate that our approach consistently outperforms state-of-the-art deraining algorithms." "In this section, we evaluate our method on three synthetic datasets: Rain200L, Rain200H [Yang et al., 2017] and Rain800 [Zhang et al., 2019]."
Researcher Affiliation | Academia | (1) Nanjing University of Aeronautics and Astronautics, Nanjing, China; (2) Lingnan University, Hong Kong, China; (3) Xiamen University, Xiamen, China; (4) Ryerson University, Toronto, Canada; (5) Hong Kong Polytechnic University, Hong Kong, China
Pseudocode | No | The paper describes the architecture and components verbally and visually (Figure 2) but does not include any pseudocode or algorithm blocks. (A hypothetical sketch of such a block follows the table.)
Open Source Code | No | There is no explicit statement about releasing the code for the described method, nor any link to a code repository.
Open Datasets | Yes | "In this section, we evaluate our method on three synthetic datasets: Rain200L, Rain200H [Yang et al., 2017] and Rain800 [Zhang et al., 2019]."
Dataset Splits | No | The paper mentions using three synthetic datasets but does not specify train/validation/test splits, either as percentages or absolute counts. It refers to evaluation but gives no explicit split details.
Hardware Specification | No | The paper does not mention the specific hardware (GPU or CPU models, etc.) used to run the experiments.
Software Dependencies | No | The paper does not list any software dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions).
Experiment Setup | No | The paper discusses the network architecture and loss functions but does not provide training hyperparameters such as learning rate, batch size, number of epochs, or optimizer details. (A hypothetical sketch of a complete setup follows the table.)
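
Since the paper provides no pseudocode, the following is a minimal PyTorch sketch of one plausible reading of feature-level frequency decomposition combined with direction-aware filtering. Everything here (the module names, the pool-and-upsample decomposition, the elongated kernel shapes) is an assumption made for illustration, not the authors' actual network.

import torch
import torch.nn as nn
import torch.nn.functional as F

class FrequencyDecomposition(nn.Module):
    """Split a feature map into low- and high-frequency parts.

    Hypothetical sketch: the low-frequency branch is approximated by
    pool-then-upsample smoothing; the high-frequency branch is the
    residual, which is where thin structures such as rain streaks live.
    """
    def __init__(self, scale: int = 2):
        super().__init__()
        self.scale = scale

    def forward(self, x):
        low = F.interpolate(F.avg_pool2d(x, self.scale),
                            size=x.shape[-2:], mode="bilinear",
                            align_corners=False)
        return low, x - low  # (low-frequency, high-frequency residual)

class DirectionAwareConv(nn.Module):
    """Hypothetical direction-aware filtering: elongated vertical and
    horizontal kernels whose responses are fused by a 1x1 convolution."""
    def __init__(self, channels: int):
        super().__init__()
        self.vert = nn.Conv2d(channels, channels, (5, 1), padding=(2, 0))
        self.horz = nn.Conv2d(channels, channels, (1, 5), padding=(0, 2))
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x):
        return self.fuse(torch.cat([self.vert(x), self.horz(x)], dim=1))

# Quick shape check on a random feature map.
feats = torch.rand(1, 32, 64, 64)
low, high = FrequencyDecomposition()(feats)
out = DirectionAwareConv(32)(high)
print(low.shape, high.shape, out.shape)  # all torch.Size([1, 32, 64, 64])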
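
For comparison, a complete experiment-setup disclosure would pin down exactly the items the checklist flags as missing. The sketch below is a generic PyTorch training loop in which every concrete value (optimizer, learning rate, schedule, batch size, epoch count) is a hypothetical placeholder chosen only for illustration; none of them come from the paper.

import torch
from torch import nn, optim
from torch.optim.lr_scheduler import MultiStepLR
from torch.utils.data import DataLoader, TensorDataset

# Every number below is a hypothetical placeholder, not a value from the paper.
model = nn.Sequential(                      # stand-in for the deraining network
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1))
optimizer = optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.999))
scheduler = MultiStepLR(optimizer, milestones=[30, 50], gamma=0.2)
criterion = nn.L1Loss()

# Synthetic rainy/clean pairs standing in for Rain200L/Rain200H/Rain800 crops.
pairs = TensorDataset(torch.rand(8, 3, 64, 64), torch.rand(8, 3, 64, 64))
loader = DataLoader(pairs, batch_size=4, shuffle=True)

for epoch in range(60):                     # hypothetical epoch count
    for rainy, clean in loader:
        optimizer.zero_grad()
        loss = criterion(model(rainy), clean)
        loss.backward()
        optimizer.step()
    scheduler.step()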