Learning Controllable Degradation for Real-World Super-Resolution via Constrained Flows
Authors: Seobin Park, Dongjin Kim, Sungyong Baik, Tae Hyun Kim
ICML 2023 | Conference PDF | Archive PDF
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our quantitative and qualitative experiments demonstrate the accuracy of the generated LR images, and we show that the various conventional SR networks trained with our newly generated SR datasets can produce much better HR images. In this section, we elaborate on implementation details and measure the quantitative and qualitative results of the generated LR images, and show the elevation of SR performance with the aid of our synthetic LR images. |
| Researcher Affiliation | Academia | ¹Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, Texas, USA; ²Department of Computer Science, Hanyang University, Seoul, Korea; ³Department of Data Science, Hanyang University, Seoul, Korea. |
| Pseudocode | No | The paper provides mathematical formulations but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | We will release our source code and dataset upon acceptance. |
| Open Datasets | Yes | To train and evaluate the proposed method, we use the real-world SR dataset (RealSR ver. 2). Specifically, the RealSR (Cai et al., 2019) dataset was collected by taking images of the same scene with different focal lengths and aligning them through optimization. |
| Dataset Splits | No | The paper mentions using 'Canon train-dataset' and 'Nikon dataset' for training and evaluation respectively, but does not explicitly provide details about a distinct validation dataset split with percentages, counts, or methodology. |
| Hardware Specification | Yes | It takes 1.5 days to train with an NVIDIA V100 GPU and takes 0.35 seconds to generate a single LR image which has a resolution of 160×160. |
| Software Dependencies | No | The paper mentions the Adam optimizer and refers to various model components and architectures, but it does not specify any software libraries or their version numbers, such as Python versions, TensorFlow/PyTorch versions, or other specific dependencies. |
| Experiment Setup | Yes | Our Inter Flow is trained by minimizing the loss in (10) using the Adam optimizer (Kingma & Ba, 2015) with 160×160 train-patches (batch size = 8) for 100k iterations. The learning rate is initially set to 10⁻⁴ and reduced by half at 50k, 75k, and 90k iterations. Moreover, the downscaling factor in the proposed LR-consistency loss in (8) is set to 4, and we use λ_LR-cons = 10 and λ_IB = 1, which are determined empirically. |
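
The Experiment Setup row above fully specifies the optimizer and learning-rate schedule. Below is a minimal PyTorch sketch of that schedule only; the network, loss terms, and data are hypothetical placeholders and do not reproduce the authors' Inter Flow implementation. Only the hyperparameters (Adam, lr 10⁻⁴ halved at 50k/75k/90k, batch size 8, 160×160 patches, ×4 downscaling, λ_LR-cons = 10, λ_IB = 1) are taken from the paper.

```python
import torch
import torch.nn.functional as F

# Hypothetical stand-ins for the paper's Inter Flow model and loss; only the
# hyperparameters below are taken from the quoted experiment setup.
model = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)  # placeholder network
l1 = torch.nn.L1Loss()                                    # placeholder loss term

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# Halve the learning rate at 50k, 75k, and 90k iterations (100k total).
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[50_000, 75_000, 90_000], gamma=0.5)

lambda_lr_cons, lambda_ib = 10.0, 1.0  # loss weights reported in the paper

for it in range(100_000):
    hr_patch = torch.rand(8, 3, 160, 160)   # batch of 8 random 160x160 "patches" (placeholder data)
    lr_target = torch.rand(8, 3, 40, 40)    # x4-downscaled targets (placeholder)

    out = F.interpolate(model(hr_patch), scale_factor=0.25)
    # Illustrative single term only; the paper's objective in (10) combines the
    # flow likelihood with the LR-consistency and IB losses weighted as above.
    loss = lambda_lr_cons * l1(out, lr_target)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()
```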