AND: Adversarial Neural Degradation for Learning Blind Image Super-Resolution
Authors: Fangzhou Luo, Xiaolin Wu, Yanhui Guo
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform experiments on four datasets constructed for real-world SR evaluation: RealSR [3], DRealSR [46], SupER [21], and ImagePairs [18]. Quantitative results are shown in Table 1, and visual comparisons between different methods are shown in Fig. 3. In order to study the effects of each component in the proposed blind SR method, we gradually modify the AND model and compare the quantitative performances. The comparisons are shown in Table 2. |
| Researcher Affiliation | Academia | Fangzhou Luo, McMaster University, luof1@mcmaster.ca; Xiaolin Wu, McMaster University, xwu@ece.mcmaster.ca; Yanhui Guo, McMaster University, guoy143@mcmaster.ca |
| Pseudocode | No | The paper describes the training procedure in sequential steps but does not include a clearly labeled 'Pseudocode' or 'Algorithm' block. |
| Open Source Code | No | The paper does not contain an explicit statement about releasing the source code for the described methodology or a direct link to a code repository. |
| Open Datasets | Yes | We use DIV2K [39], Flickr2K [27] and WED [30] as HR image datasets for training. |
| Dataset Splits | No | The paper mentions using DIV2K, Flickr2K, and WED for training and other datasets for evaluation, but does not provide specific percentages or sample counts for training, validation, or test splits, nor does it refer to predefined splits for these datasets. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models, processor types, or memory amounts used for running its experiments. |
| Software Dependencies | No | The paper mentions using the Adam optimizer and ESRGAN model, but does not provide specific version numbers for any software libraries, frameworks, or solvers used in the experiments. |
| Experiment Setup | Yes | The training HR patch size is set to 256 and the batch size is set to 48. First, we train ANDNet with the L1 loss only, for 1 × 10^6 iterations with a 1 × 10^-4 learning rate. ... train the whole ANDGAN model with both the content loss and the GAN loss in Equation 1, which are balanced by λ = 0.1, for 5 × 10^5 iterations with a 1 × 10^-4 learning rate. We use the Adam optimizer [20] for both generator and discriminator training. ... run 5 iterations of projected gradient descent with step size of 6 and perturbation size ε = 20. ... use scale factors of 1, 10, 50 for the terms of convolutional degradation, noise, and nonlinearity, respectively. (See the configuration sketch after the table.) |
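
Since the paper releases no code, the quoted hyperparameters can at best be read off into a small sketch. The following is a minimal, hypothetical PyTorch rendering, not the authors' implementation: the module names, the toy degradation model in the smoke test, and the way the scale factors (1, 10, 50) enter the PGD step and projection are all assumptions; only the numeric values come from the quoted experiment setup.

```python
import torch
import torch.nn.functional as F

# Hyperparameters quoted in the experiment setup above.
HR_PATCH_SIZE = 256        # training HR patch size
BATCH_SIZE = 48            # batch size
LEARNING_RATE = 1e-4       # learning rate for both training stages (Adam)
L1_ITERS = 1_000_000       # stage 1: ANDNet trained with the L1 loss only
GAN_ITERS = 500_000        # stage 2: ANDGAN trained with content + GAN loss
LAMBDA_GAN = 0.1           # balance between content loss and GAN loss (Equation 1)
PGD_STEPS = 5              # projected gradient descent iterations
PGD_STEP_SIZE = 6.0        # PGD step size
PGD_EPS = 20.0             # perturbation size epsilon
DEG_SCALES = (1.0, 10.0, 50.0)  # per-term scales: conv degradation, noise, nonlinearity


def stage2_total_loss(content_loss, gan_loss):
    # Assumption: "balanced by lambda = 0.1" is read as content + lambda * GAN.
    return content_loss + LAMBDA_GAN * gan_loss


def adversarial_degradation(deg_params, sr_model, hr, degrade, content_loss=F.l1_loss):
    """Hypothetical PGD over degradation parameters: perturb each term inside a
    per-term-scaled eps-ball so the synthesized LR input maximizes the SR error."""
    base = [p.detach() for p in deg_params]
    adv = [p.detach().clone().requires_grad_(True) for p in deg_params]
    for _ in range(PGD_STEPS):
        lr_img = degrade(hr, adv)                    # synthesize LR with perturbed degradation
        loss = content_loss(sr_model(lr_img), hr)    # reconstruction error to maximize
        grads = torch.autograd.grad(loss, adv)
        with torch.no_grad():
            for p, g, b, s in zip(adv, grads, base, DEG_SCALES):
                p += PGD_STEP_SIZE * s * g.sign()                 # ascent step
                p.copy_(torch.max(torch.min(p, b + PGD_EPS * s),  # project back into
                                  b - PGD_EPS * s))               # the scaled eps-ball
    return [p.detach() for p in adv]


if __name__ == "__main__":
    # Toy smoke test with stand-in modules; none of this is the paper's ANDGAN architecture.
    sr_model = torch.nn.Sequential(
        torch.nn.Upsample(scale_factor=2, mode="bilinear"),
        torch.nn.Conv2d(3, 3, 3, padding=1),
    )
    hr = torch.rand(2, 3, 64, 64)

    def degrade(hr_img, params):
        blur_bias, noise, gamma = params
        x = F.avg_pool2d(hr_img + blur_bias, 2)      # stand-in for convolutional degradation + downsampling
        x = x + F.avg_pool2d(noise, 2)               # additive noise term
        return x * torch.sigmoid(gamma)              # crude nonlinearity term

    deg_params = [torch.zeros(2, 3, 64, 64), torch.zeros(2, 3, 64, 64), torch.zeros(1)]
    adv_params = adversarial_degradation(deg_params, sr_model, hr, degrade)
    print([p.abs().max().item() for p in adv_params])
```

In the described pipeline, the adversarially perturbed degradation would then be applied to HR patches to synthesize LR-HR training pairs for the ESRGAN-based generator and discriminator, both optimized with Adam at the quoted learning rate; optimizer betas and the exact projection rule are not specified in the paper and are assumed here.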