Best-Buddy GANs for Highly Detailed Image Super-resolution
Authors: Wenbo Li, Kun Zhou, Lu Qi, Liying Lu, Jiangbo Lu
AAAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Extensive experiments justify the effectiveness of our method. An ultra-high-resolution 4K dataset is also constructed to facilitate future super-resolution research." Also: "Our network is trained on DIV2K (Agustsson and Timofte 2017) (800 images) and Flickr2K (Timofte et al. 2017) (2650 images) datasets." and "We also conduct a user study for better comparison." |
| Researcher Affiliation | Collaboration | 1 The Chinese University of Hong Kong; 2 Smartmore Technology; {wenboli,luqi,lylu}@cse.cuhk.edu.hk; {kun.zhou,jiangbo}@smartmore.com |
| Pseudocode | No | The paper describes its methods in narrative text and mathematical equations, but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper states, 'Breaking through the 2K resolution limitation of current SISR datasets, we provide an ultra-high-resolution 4K (UH4K) image dataset with diverse categories to promote future study, which will be made publicly available.' This refers to the dataset, not the source code for the methodology. No other explicit statement or link to the source code was found. |
| Open Datasets | Yes | "Our network is trained on DIV2K (Agustsson and Timofte 2017) (800 images) and Flickr2K (Timofte et al. 2017) (2650 images) datasets." and "Breaking through the 2K resolution limitation of current SISR datasets, we provide an ultra-high-resolution 4K (UH4K) image dataset with diverse categories to promote future study, which will be made publicly available." |
| Dataset Splits | Yes | Apart from the widely used testing benchmarks including Set5 (Bevilacqua et al. 2012), Set14 (Zeyde, Elad, and Protter 2010), BSDS100 (Martin et al. 2001) and Urban100 (Huang, Singh, and Ahuja 2015), we also adopt the 100 validation images in DIV2K to evaluate the performance of our model. (An illustrative evaluation sketch follows the table.) |
| Hardware Specification | Yes | All experiments are carried out on NVIDIA GeForce RTX 2080 Ti GPUs under the ×4 setting. |
| Software Dependencies | No | The paper mentions using 'Adam as the optimizer' and a 'pretrained VGG-19' network, but does not specify software versions for programming languages or libraries (e.g., Python version, TensorFlow/PyTorch version). |
| Experiment Setup | Yes | The mini-batch size is set to 8. We adopt Adam as the optimizer with β1 = 0.9 and β2 = 0.999. There are 3 periods in our training, each with 200K iterations. The learning rate for every period is initially set to 1 × 10⁻⁴, accompanied by a warm-up and a cosine decay. The images are augmented with random cropping, flipping and rotation. The input size is 48 × 48 and the rotation is 90° or −90°. The α and β are both set to 1.0 from empirical experiments. The kernel size k and δ are set to 11 and 0.025 (for normalized images). (An illustrative training sketch follows the table.) |
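
To make the quoted evaluation protocol concrete, here is a minimal sketch of how the listed benchmarks could be scored. Only the benchmark names come from the paper; the directory layout, the missing image loader, and the choice of PSNR on raw RGB values are illustrative assumptions, since no evaluation code is released.

```python
# Illustrative evaluation sketch. Benchmark names are from the paper's quoted
# text; paths and the PSNR-on-RGB choice are assumptions, not the authors' code.
import numpy as np

EVAL_SETS = {
    "Set5": "data/Set5",
    "Set14": "data/Set14",
    "BSDS100": "data/BSDS100",
    "Urban100": "data/Urban100",
    "DIV2K-val": "data/DIV2K_valid_HR",  # the 100 DIV2K validation images
}

def psnr(sr: np.ndarray, hr: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio between two uint8 images of equal shape."""
    mse = np.mean((sr.astype(np.float64) - hr.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Typical usage: for each benchmark, average psnr(model(lr), hr) over all
# image pairs and report the per-set mean.
```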
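
The experiment-setup row is specific enough to sketch in code. The following PyTorch snippet is a hedged illustration, not the authors' implementation (none is released): the batch size, Adam betas, three 200K-iteration periods, base learning rate, crop size, and rotation angles come from the quoted setup, while the warm-up length, crop/flip logic, and the stand-in model are assumptions.

```python
# Sketch of the described training schedule, assuming PyTorch. Values marked
# "assumed" are not stated in the paper.
import math
import random
import torch

PERIODS, ITERS_PER_PERIOD = 3, 200_000
BASE_LR, WARMUP = 1e-4, 2_000  # warm-up length is an assumed value

def lr_at(step: int) -> float:
    """Per-period learning rate: linear warm-up, then cosine decay to zero.
    Each of the 3 periods restarts at 1e-4, as the paper describes."""
    s = step % ITERS_PER_PERIOD
    if s < WARMUP:
        return BASE_LR * s / WARMUP
    t = (s - WARMUP) / (ITERS_PER_PERIOD - WARMUP)  # progress in [0, 1]
    return 0.5 * BASE_LR * (1.0 + math.cos(math.pi * t))

def augment(img: torch.Tensor) -> torch.Tensor:
    """Random 48x48 crop, random flip, and a 90° or -90° rotation (CxHxW)."""
    _, h, w = img.shape
    top, left = random.randrange(h - 48 + 1), random.randrange(w - 48 + 1)
    patch = img[:, top:top + 48, left:left + 48]
    if random.random() < 0.5:
        patch = torch.flip(patch, dims=[-1])  # horizontal flip
    if random.random() < 0.5:
        patch = torch.rot90(patch, k=random.choice([1, 3]), dims=(-2, -1))
    return patch

model = torch.nn.Conv2d(3, 3, 3, padding=1)  # stand-in for the SR generator
opt = torch.optim.Adam(model.parameters(), lr=BASE_LR, betas=(0.9, 0.999))

for step in range(PERIODS * ITERS_PER_PERIOD):
    for group in opt.param_groups:
        group["lr"] = lr_at(step)
    # ... sample a mini-batch of 8 augmented LR/HR pairs, compute the losses
    # (the paper weights them with α = β = 1.0 and uses k = 11, δ = 0.025),
    # then backpropagate and call opt.step().
```

The per-period restart mirrors the paper's "3 periods, each with 200K iterations" schedule; a warm-restart scheduler such as `torch.optim.lr_scheduler.CosineAnnealingWarmRestarts` could replace the hand-rolled `lr_at`, but it does not include the warm-up phase the paper mentions.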