Unfolding Taylor's Approximations for Image Restoration

Authors: Man Zhou, Xueyang Fu, Zeyu Xiao, Gang Yang, Aiping Liu, Zhiwei Xiong

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To demonstrate the effectiveness and scalability of our proposed framework, we conduct experiments over two image restoration tasks, i.e., image deraining and image deblurring. (Section 4, Experiments)
Researcher Affiliation | Academia | University of Science and Technology of China. {manman}@mail.ustc.edu.cn, xyfu@ustc.edu.cn
Pseudocode | No | Figure 1: The overview structure of the Deep Taylor's Approximations Framework.
Open Source Code | No | Code will be publicly available upon acceptance.
Open Datasets | Yes | Regarding the rainy datasets, the first three methods, including DDN, DCM and PReNet, are trained and evaluated over three widely-used standard benchmark datasets: Rain100H, Rain100L, and Rain800. ... For the image deblurring task, typical methods like DCM [14], MSCNN [30] and RDN [66, 65] are adopted in our experiments. As in [62, 45, 26], we use the GoPro [30] dataset, which contains 2,103 image pairs for training and 1,111 pairs for evaluation. Furthermore, to demonstrate generalizability, we take the model trained on the GoPro dataset and directly apply it to the test images of the HIDE [41] and RealBlur [39] datasets.
Dataset Splits | No | For the image deraining task... Rain100H, a heavy-rain dataset, is composed of 1,800 rainy images for training and 100 rainy samples for testing. ... The Rain100L dataset contains 200 training samples and 100 testing images... Rain800 is proposed in [58] and includes 700 training and 100 testing images. ... For the image deblurring task... we use the GoPro [30] dataset, which contains 2,103 image pairs for training and 1,111 pairs for evaluation. (These splits are summarized in the first sketch after this table.)
Hardware Specification | Yes | One NVIDIA GTX 2080Ti GPU is used for training.
Software Dependencies | No | The ADAM algorithm is adopted to train the models with an initial learning rate of 1×10⁻³, and training ends after 100 epochs.
Experiment Setup | Yes | The patch size is 100×100 and the batch size is 4. The ADAM algorithm is adopted to train the models with an initial learning rate of 1×10⁻³, and training ends after 100 epochs. At epochs 30, 50 and 80, the learning rate is decayed by a factor of 0.2, and λ is set to 1.0 in the loss function. (A minimal training sketch matching this configuration follows the table.)
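
For quick reference, the train/test splits quoted above can be collected into a small lookup table. The snippet below is only an illustrative summary of the reported counts; the dictionary name and layout are our own and are not part of the paper's released code.

```python
# Illustrative summary of the train/test splits quoted in the table above.
# Counts are image (pair) numbers as reported in the paper; HIDE and RealBlur
# are used only for testing generalization, so no counts are listed for them.
DATASET_SPLITS = {
    "Rain100H": {"train": 1800, "test": 100},   # heavy-rain deraining benchmark
    "Rain100L": {"train": 200,  "test": 100},   # light-rain deraining benchmark
    "Rain800":  {"train": 700,  "test": 100},   # deraining benchmark from [58]
    "GoPro":    {"train": 2103, "test": 1111},  # deblurring benchmark from [30]
}

if __name__ == "__main__":
    for name, split in DATASET_SPLITS.items():
        print(f"{name}: {split['train']} training / {split['test']} testing images")
```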
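The quoted optimization settings map directly onto a standard Adam optimizer with a multi-step learning-rate schedule (milestones at epochs 30, 50 and 80, decay factor 0.2). The sketch below is a minimal reconstruction under stated assumptions, not the authors' implementation: the network, the toy dataset, and the two-term loss weighted by λ are hypothetical placeholders, since the paper's actual model and loss are not given in the excerpts above.

```python
import torch
from torch import nn, optim
from torch.optim.lr_scheduler import MultiStepLR
from torch.utils.data import DataLoader, TensorDataset

# Placeholder network standing in for the paper's unfolding framework.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 3, 3, padding=1)
)

# Toy dataset of random 100x100 patches; a real run would load degraded/clean image pairs.
pairs = TensorDataset(torch.rand(8, 3, 100, 100), torch.rand(8, 3, 100, 100))
loader = DataLoader(pairs, batch_size=4, shuffle=True)        # batch size 4, as quoted

lam = 1.0                                                     # λ = 1.0 in the loss function
fidelity = nn.L1Loss()                                        # placeholder fidelity term
auxiliary = nn.MSELoss()                                      # placeholder term weighted by λ

optimizer = optim.Adam(model.parameters(), lr=1e-3)           # ADAM, initial learning rate 1e-3
scheduler = MultiStepLR(optimizer, milestones=[30, 50, 80], gamma=0.2)  # decay by 0.2

for epoch in range(100):                                      # training ends after 100 epochs
    for degraded, clean in loader:
        restored = model(degraded)
        loss = fidelity(restored, clean) + lam * auxiliary(restored, clean)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    scheduler.step()                                          # lr drops at epochs 30, 50, 80
```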