Image Formation Model Guided Deep Image Super-Resolution

Authors: Jinshan Pan, Yang Liu, Deqing Sun, Jimmy Ren, Ming-Ming Cheng, Jian Yang, Jinhui Tang

AAAI 2020, pp. 11807-11814 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experimental results show that the proposed algorithm performs favorably against state-of-the-art methods.
Researcher Affiliation | Collaboration | 1) Nanjing University of Science and Technology, 2) Dalian University of Technology, 3) Google, 4) SenseTime Research, 5) Nankai University
Pseudocode | No | The paper describes the algorithm in text and with a diagram, but does not include pseudocode or an algorithm block.
Open Source Code | Yes | The code and trained models are publicly available on the authors' websites.
Open Datasets | Yes | LR images are generated using bicubic downsampling from the DIV2K dataset (Timofte et al. 2017) for training, and Set5 (Bevilacqua et al. 2012) is used as the validation test set.
Dataset Splits | Yes | Set5 (Bevilacqua et al. 2012) is used as the validation test set.
Hardware Specification | Yes | The testing environment is on a machine with an Intel Core i7-7700 CPU and an NVIDIA GTX 1080Ti GPU.
Software Dependencies | No | The paper mentions PyTorch but does not specify a version number or other software dependencies with version details.
Experiment Setup | Yes | In the learning process, we use the ADAM optimizer (Kingma and Ba 2014) with parameters β1 = 0.9, β2 = 0.999, and ϵ = 10^-8. The minibatch size is set to be 1. The learning rate is initialized to be 10^-4. We use a Gaussian kernel in (3) with the same settings used in (Shan et al. 2008). We empirically set T = 3 as a trade-off between accuracy and speed.
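For concreteness, the data-preparation protocol quoted in the Open Datasets row (bicubic downsampling of DIV2K HR images to form LR training inputs) could be sketched as follows. This is a minimal illustration, assuming Pillow's bicubic resize as the downsampling operator and a scale factor of 4 chosen here for illustration; the authors' exact preprocessing pipeline is not specified in the table.

```python
from PIL import Image

def make_lr_hr_pair(hr_path, scale=4):
    """Create an (LR, HR) training pair by bicubic downsampling.

    The crop to multiples of `scale` mirrors common DIV2K preprocessing;
    the paper's pipeline is not described beyond "bicubic downsampling".
    """
    hr = Image.open(hr_path).convert("RGB")
    w, h = hr.size
    w, h = w - w % scale, h - h % scale   # crop so dimensions divide evenly
    hr = hr.crop((0, 0, w, h))
    lr = hr.resize((w // scale, h // scale), resample=Image.BICUBIC)
    return lr, hr

# Usage (hypothetical path):
# lr, hr = make_lr_hr_pair("DIV2K_train_HR/0001.png", scale=4)
```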
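The optimizer settings quoted in the Experiment Setup row (ADAM with β1 = 0.9, β2 = 0.999, ϵ = 10^-8, minibatch size 1, initial learning rate 10^-4) map directly onto a PyTorch configuration. The sketch below is only a skeleton: the placeholder model, L1 loss, and 4x scale are assumptions, not details confirmed by the table.

```python
import torch

# Placeholder network standing in for the authors' super-resolution model,
# which is not reproduced here.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 64, kernel_size=3, padding=1),
    torch.nn.ReLU(inplace=True),
    torch.nn.Conv2d(64, 3, kernel_size=3, padding=1),
)

# Optimizer hyperparameters as quoted in the Experiment Setup row.
optimizer = torch.optim.Adam(
    model.parameters(),
    lr=1e-4,             # initial learning rate 10^-4
    betas=(0.9, 0.999),  # β1, β2
    eps=1e-8,            # ϵ
)
criterion = torch.nn.L1Loss()  # assumed loss; not stated in the table

# One training step with minibatch size 1, using dummy tensors for illustration.
lr_img = torch.rand(1, 3, 32, 32)     # low-resolution input
hr_img = torch.rand(1, 3, 128, 128)   # high-resolution target (4x scale assumed)

sr = torch.nn.functional.interpolate(
    model(lr_img), size=hr_img.shape[-2:], mode="bicubic", align_corners=False
)
loss = criterion(sr, hr_img)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```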
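The same row notes that the image formation model in (3) uses a Gaussian blur kernel with the settings of Shan et al. (2008). Those settings are not quoted here, so the sketch below only builds a generic normalized 2D Gaussian kernel with placeholder size and sigma values.

```python
import numpy as np

def gaussian_kernel(size=15, sigma=2.0):
    """Build a normalized 2D Gaussian blur kernel.

    `size` and `sigma` are placeholders; the paper defers the actual
    settings to Shan et al. (2008), which are not reproduced in the table.
    """
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

# kernel = gaussian_kernel()   # shape (15, 15), entries sum to 1
```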