Uncertainty-Driven Loss for Single Image Super-Resolution
Authors: Qian Ning, Weisheng Dong, Xin Li, Jinjian Wu, Guangming Shi
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on three popular SISR networks show that our proposed uncertainty-driven loss has achieved better PSNR performance than traditional loss functions without any increased computation during testing. (Supporting sections: 4.1 Experimental Settings; 4.2 Ablation Study; 4.5 Results with BI Degradation Model; 4.6 Results with BD Degradation Model.) |
| Researcher Affiliation | Academia | 1. School of Artificial Intelligence, Xidian University, Xi'an 710071, China; 2. Lane Dept. of CSEE, West Virginia University, Morgantown, WV 26506, USA |
| Pseudocode | No | The paper describes the methodology in text and through mathematical formulations and figures, but it does not include explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code is available at https://see.xidian.edu.cn/faculty/wsdong/Projects/UDL-SR.htm |
| Open Datasets | Yes | 800 high-quality (2K resolution) images from the DIV2K dataset [29] have been used for training. |
| Dataset Splits | No | The paper states that DIV2K is 'used for training' and lists several datasets 'used for testing', but it does not explicitly specify train/validation/test splits or a cross-validation setup. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments. It only mentions 'Our analysis of training cost can be found in our supplementary material' without detailing hardware in the main text. |
| Software Dependencies | No | The paper mentions the ADAM algorithm for optimization but does not specify any software dependencies like programming languages or libraries with their version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | We randomly select 16 RGB LR patches sized by 48×48 as the inputs. The image patches are randomly rotated by 90°, 180°, 270° and flipped horizontally. The ADAM algorithm [31] with β₁ = 0.9, β₂ = 0.999, ϵ = 10⁻⁸ is adopted to optimize the network. The initial learning rate is 10⁻⁴ and decreases by half for every 2×10⁵ minibatch updates. (See the training-configuration sketch below.) |
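For reference, the quoted experiment setup translates into a short training-configuration sketch. This is a minimal illustration, assuming PyTorch (the summary does not state the framework of the released code); `model` is a placeholder convolution standing in for the SISR backbones evaluated in the paper, and the random tensors are dummy stand-ins for DIV2K patches, not real data.

```python
import random
import torch
import torch.nn as nn
from torch.optim import Adam
from torch.optim.lr_scheduler import StepLR

def augment(lr_patch: torch.Tensor, hr_patch: torch.Tensor):
    """Random 90/180/270-degree rotation plus horizontal flip,
    applied identically to the LR/HR pair, as described in the paper."""
    k = random.randint(0, 3)  # 0 = no rotation; 1-3 = 90/180/270 degrees
    lr_patch = torch.rot90(lr_patch, k, dims=(-2, -1))
    hr_patch = torch.rot90(hr_patch, k, dims=(-2, -1))
    if random.random() < 0.5:  # horizontal flip
        lr_patch = torch.flip(lr_patch, dims=(-1,))
        hr_patch = torch.flip(hr_patch, dims=(-1,))
    return lr_patch, hr_patch

# Placeholder network: a single conv layer standing in for the SISR backbones.
model = nn.Conv2d(3, 3, kernel_size=3, padding=1)

# ADAM with the hyperparameters quoted in the table above.
optimizer = Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999), eps=1e-8)

# Halve the learning rate every 2x10^5 minibatch updates.
scheduler = StepLR(optimizer, step_size=200_000, gamma=0.5)

# One illustrative minibatch: 16 RGB LR patches of size 48x48
# (HR targets shown here for a hypothetical x2 scale factor).
lr_batch = torch.rand(16, 3, 48, 48)
hr_batch = torch.rand(16, 3, 96, 96)
lr_batch, hr_batch = augment(lr_batch, hr_batch)
```

Note that the loss function itself is deliberately omitted here: the paper's uncertainty-driven loss is its central contribution, and its exact formulation is not reproduced in this summary.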