GAN Prior Based Null-Space Learning for Consistent Super-resolution
Authors: Yinhuai Wang, Yujie Hu, Jiwen Yu, Jian Zhang
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments show that the use of PD refreshes state-of-the-art SR performance and speeds up the convergence of training up to 2~10 times. We validate PD on two typical GAN prior based SR networks: Panini (Wang, Hu, and Zhang 2022) and GLEAN (Chan et al. 2021). We experiment with SR on three typical categories: human face, cat, and church. |
| Researcher Affiliation | Academia | ¹Peking University Shenzhen Graduate School, China; ²Peng Cheng Laboratory, China |
| Pseudocode | No | The paper describes the Pooling-based Decomposition (PD) method in detail and illustrates it with Fig. 2, but does not include a formally labeled 'Pseudocode' or 'Algorithm' block (a hedged sketch of the decomposition is given after this table). |
| Open Source Code | Yes | Codes: https://github.com/wyhuai/RND. |
| Open Datasets | Yes | To experiment with 8× SR on the human face, we train Panini, GLEAN, and their PD-based version on the FFHQ dataset (Karras, Laine, and Aila 2019). For evaluation, we take 1K images from the CelebA-HQ (Karras, Laine, and Aila 2019) dataset as the ground truth (GT). Likewise, we experiment with 8× SR on the LSUN cat and church datasets (Yu et al. 2015) for GLEAN and Panini and their PD-based version. |
| Dataset Splits | No | The paper mentions using 1K images from CelebA-HQ for evaluation (testing) but does not provide specific details on a separate validation split for hyperparameter tuning or early stopping. |
| Hardware Specification | Yes | For 8× SR, we train Panini, Panini w/ PD, GLEAN, and GLEAN w/ PD under the same training configuration for 100K iterations, with a batch size of 4 on a single Nvidia V100 GPU. |
| Software Dependencies | No | The paper mentions software components like 'Adam optimizer' and different loss functions, but does not provide specific version numbers for any programming languages or libraries used (e.g., PyTorch version, Python version). |
| Experiment Setup | Yes | In detail, we use the Adam optimizer and Cosine Annealing Scheme with three training objectives: ℓ1 loss, perceptual loss (Johnson, Alahi, and Fei-Fei 2016), and GAN loss (Goodfellow et al. 2014). For 8× SR, the loss weights are set as 1, 1×10⁻², and 1×10⁻², respectively. The learning rate is set as 1×10⁻³. ... for 100K iterations, with a batch size of 4... (A hedged sketch of this configuration follows the table.) |
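
As noted in the pseudocode row, the PD method is described only in prose and figures. The sketch below illustrates a pooling-based range/null-space decomposition for 8× SR, assuming average pooling plays the role of the degradation operator A and nearest-neighbor upsampling plays the role of its pseudo-inverse A⁺ (so that A(A⁺(y)) = y). The function names `A`, `A_pinv`, and `consistent_sr`, as well as the tensor shapes, are illustrative and not taken from the paper's repository.

```python
# Minimal sketch of a pooling-based range/null-space decomposition for 8x SR.
# Assumption: average pooling is the degradation A, nearest-neighbor
# (replication) upsampling is its pseudo-inverse A^+, so A(A^+(y)) == y.
import torch
import torch.nn.functional as F

def A(x, scale=8):
    """Degradation operator: 8x average pooling."""
    return F.avg_pool2d(x, kernel_size=scale)

def A_pinv(y, scale=8):
    """Pseudo-inverse of A: nearest-neighbor upsampling."""
    return F.interpolate(y, scale_factor=scale, mode="nearest")

def consistent_sr(x_hat, y, scale=8):
    """Project a generator output x_hat so its downsampled version equals y.

    The range-space content comes from the low-resolution input y; the
    generator only contributes the null-space component (I - A^+ A) x_hat.
    """
    return A_pinv(y, scale) + x_hat - A_pinv(A(x_hat, scale), scale)

# Toy consistency check: A(consistent_sr(x_hat, y)) should reproduce y.
y = torch.rand(1, 3, 16, 16)          # low-resolution input
x_hat = torch.rand(1, 3, 128, 128)    # raw generator output
x = consistent_sr(x_hat, y)
assert torch.allclose(A(x), y, atol=1e-5)
```

Because averaging a block of replicated values returns the original value, the final assertion holds exactly (up to floating-point rounding), which is what makes the projected output consistent with the low-resolution input by construction.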
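For the experiment-setup row, the quoted configuration (Adam, cosine annealing, loss weights 1 / 1×10⁻² / 1×10⁻², learning rate 1×10⁻³, 100K iterations, batch size 4) can be summarized as below. The `generator` and loss modules are placeholders, not the actual Panini/GLEAN implementations, and only the ℓ1 term is instantiated; the perceptual and GAN terms are indicated in comments.

```python
# Hedged sketch of the quoted training configuration; module choices are
# placeholders standing in for the paper's actual networks and losses.
import torch
import torch.nn as nn

# Placeholder generator: the real setup would use Panini/GLEAN (w/ or w/o PD).
generator = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.Upsample(scale_factor=8))
l1_loss = nn.L1Loss()

TOTAL_ITERS = 100_000                       # 100K iterations, as quoted
optimizer = torch.optim.Adam(generator.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=TOTAL_ITERS)

W_L1, W_PERC, W_GAN = 1.0, 1e-2, 1e-2       # loss weights quoted for 8x SR

for it in range(TOTAL_ITERS):
    lq = torch.rand(4, 3, 16, 16)           # batch size 4, low-quality inputs
    gt = torch.rand(4, 3, 128, 128)         # ground-truth high-resolution images
    sr = generator(lq)
    # Perceptual and GAN terms omitted; only the weighted-sum structure is shown:
    loss = W_L1 * l1_loss(sr, gt)           # + W_PERC * perc(sr, gt) + W_GAN * gan(sr)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()
```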