Image Restoration with Mean-Reverting Stochastic Differential Equations
Authors: Ziwei Luo, Fredrik K. Gustafsson, Zheng Zhao, Jens Sjölund, Thomas B. Schön
ICML 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experiments show that our proposed method achieves highly competitive performance in quantitative comparisons on image deraining, deblurring, and denoising, setting a new state-of-the-art on two deraining datasets. (...) We experimentally evaluate our proposed IR-SDE method on three popular image restoration tasks: image deraining, deblurring and denoising. We compare IR-SDE to the prevailing approaches in their respective fields. |
| Researcher Affiliation | Academia | Ziwei Luo, Fredrik K. Gustafsson, Zheng Zhao, Jens Sjölund, Thomas B. Schön. Department of Information Technology, Uppsala University, Sweden. Correspondence to: Ziwei Luo <ziwei.luo@it.uu.se>. |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at https://github.com/Algolzw/image-restoration-sde. |
| Open Datasets | Yes | We evaluate IR-SDE on two synthetic raining datasets: Rain100H (Yang et al., 2017) and Rain100L (Yang et al., 2017). (...) We evaluate the deblurring performance of IR-SDE on the public GoPro dataset (Nah et al., 2017) (...) To evaluate the image denoising performance, we train our models on 8 294 high-quality images collected from the DIV2K (Agustsson & Timofte, 2017), Flickr2K (Timofte et al., 2017), BSD500 (Arbelaez et al., 2010), and Waterloo Exploration datasets (Ma et al., 2016). (...) Our IR-SDE is trained and evaluated on the DIV2K (Agustsson & Timofte, 2017) dataset. (...) We select the CelebA-HQ (Karras et al., 2018) dataset to train and test the IR-SDE on this task. (...) We train the IR-SDE on the RESIDE (Li et al., 2018) Indoor Training Set (ITS) and test it on the Synthetic Objective Testing Set (SOTS). |
| Dataset Splits | No | The paper mentions specific training and testing set sizes for datasets like Rain100H/L (e.g., "1 800 pairs ... for training, and 100 pairs for testing") and GoPro ("2 103 image pairs for training and 1 111 image pairs for testing"). For DIV2K, it states "Figure 5 shows the qualitative results on the DIV2K validation dataset" but does not provide quantitative details about this validation split or how it was used in training, as would be needed for reproducibility across all experiments. |
| Hardware Specification | Yes | All of our models are trained on an A100 GPU with 40GB memory for about 1.5 days (400 000 iterations). |
| Software Dependencies | No | The paper mentions using the Adam optimizer but does not specify version numbers for any software dependencies like programming languages (e.g., Python), deep learning frameworks (e.g., PyTorch, TensorFlow), or specific libraries. |
| Experiment Setup | Yes | For most tasks, we set the training patch size to 128 × 128 and use a batch size of 16. We use the Adam (Kingma & Ba, 2014) optimizer with parameters β1 = 0.9 and β2 = 0.99. The total number of training steps is fixed to 500 thousand; the initial learning rate is set to 10⁻⁴ and decays by half every 200 thousand iterations. The stationary variance λ² is set to 10 (over 255) and we use only 100 steps for all experiments. (A hedged sketch of this training configuration follows the table.) |
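
The quoted setup maps onto a standard deep-learning training loop. Below is a minimal, self-contained sketch of those hyperparameters only: Adam with β1 = 0.9, β2 = 0.99, initial learning rate 10⁻⁴ halved every 200 thousand of 500 thousand steps, batch size 16, and 128 × 128 patches. The two-layer convolutional network, random tensors, and L1 objective are placeholder assumptions, not the authors' IR-SDE network or its maximum-likelihood loss.

```python
# Hedged sketch of the quoted training configuration in PyTorch.
# The tiny conv net, random data, and L1 loss are placeholders; they are
# NOT the paper's noise network or its training objective.
import torch
import torch.nn as nn

model = nn.Sequential(                       # stand-in for the restoration net
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1),
)

# Adam with beta1 = 0.9, beta2 = 0.99 and initial lr = 1e-4, as quoted.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.99))
# Halve the learning rate every 200k iterations over 500k total steps.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[200_000, 400_000], gamma=0.5)

batch_size, patch_size, total_steps = 16, 128, 500_000
for step in range(total_steps):
    lq = torch.randn(batch_size, 3, patch_size, patch_size)  # degraded patches
    gt = torch.randn(batch_size, 3, patch_size, patch_size)  # clean targets
    loss = nn.functional.l1_loss(model(lq), gt)              # placeholder loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()                         # advance per-iteration schedule
```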