Uncertainty-Aware GAN for Single Image Super Resolution
Authors: Chenxi Ma
AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate the effectiveness of our approach in extracting the SR uncertainty and the superiority of the UGAN over the state-of-the-art in terms of reconstruction accuracy and perceptual quality. |
| Researcher Affiliation | Academia | Fudan University, School of Computer Science and Technology, Shanghai, China cxma@fudan.edu.cn |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | By following the GAN-based SR methods (Wang et al. 2018; Zhang et al. 2019; Liang, Zeng, and Zhang 2022; Li et al. 2022; Park, Moon, and Cho 2023), we train our models on the DF2K datasets, which is composed of DIV2K (Timofte et al. 2017b) (800 images) and Flickr2K (Timofte et al. 2017a) (2650 images). |
| Dataset Splits | No | The paper names the datasets used for training and evaluation but does not describe how the training set (DF2K) was divided into training, validation, and test subsets, nor give percentages or counts for any such split. |
| Hardware Specification | Yes | All experiments are conducted on a machine with a Nvidia GTX2080Ti GPU (128G RAM). |
| Software Dependencies | No | The paper mentions "All models are based on the TensorFlow (Abadi et al. 2015) implementation" and are "optimized by Adam (Kingma and Ba 2015)" but does not specify version numbers for TensorFlow or other software libraries, which is required for reproducibility. |
| Experiment Setup | Yes | Each mini-batch contains 32 HR image patches of size 120 × 120. All models are based on the TensorFlow (Abadi et al. 2015) implementation and optimized by Adam (Kingma and Ba 2015) with β1 = 0.9, β2 = 0.999. The learning rate is 1e-4. |
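For readers attempting to reproduce the experiment setup row above, the following is a minimal TensorFlow/Keras sketch of the reported training configuration. Only the hyperparameters come from the paper (batch size 32, 120 × 120 HR patches, Adam with β1 = 0.9, β2 = 0.999, learning rate 1e-4); the placeholder generator and the L1 stand-in loss are illustrative assumptions, not the UGAN architecture or its uncertainty-aware loss.

```python
# Hedged sketch of the reported training configuration (not the paper's code).
import tensorflow as tf

BATCH_SIZE = 32        # 32 HR patches per mini-batch (reported)
PATCH_SIZE = 120       # HR patch size 120 x 120 (reported)
LEARNING_RATE = 1e-4   # reported learning rate

# Adam with beta1 = 0.9, beta2 = 0.999, as stated in the paper.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=LEARNING_RATE, beta_1=0.9, beta_2=0.999
)

# Placeholder network: a single conv layer standing in for the UGAN generator,
# included only so the sketch is self-contained and runnable.
generator = tf.keras.Sequential([
    tf.keras.layers.Conv2D(3, 3, padding="same",
                           input_shape=(PATCH_SIZE, PATCH_SIZE, 3)),
])

@tf.function
def train_step(inputs, hr_patches):
    """One optimization step with an L1 stand-in loss (assumption)."""
    with tf.GradientTape() as tape:
        sr = generator(inputs, training=True)
        loss = tf.reduce_mean(tf.abs(sr - hr_patches))
    grads = tape.gradient(loss, generator.trainable_variables)
    optimizer.apply_gradients(zip(grads, generator.trainable_variables))
    return loss

# Smoke test with random tensors standing in for DF2K patches.
hr = tf.random.uniform((BATCH_SIZE, PATCH_SIZE, PATCH_SIZE, 3))
print(train_step(hr, hr).numpy())
```

The sketch fixes only what the setup row states; dataset loading, the actual generator/discriminator, the uncertainty estimation branch, and the GAN losses would all need to be filled in from the paper itself.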