From Coarse to Fine: Hierarchical Pixel Integration for Lightweight Image Super-resolution
Authors: Jie Liu, Chao Chen, Jie Tang, Gangshan Wu
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments suggest that our method outperforms state-of-the-art lightweight SR methods by a large margin. We compare our HPINet-S/M/L with other lightweight SR models of various sizes, including MAFFSRN (Muqeet et al. 2020), RFDN (Liu, Tang, and Wu 2020), LAPAR-A (Li et al. 2020), IMDN (Hui et al. 2019), LatticeNet (Luo et al. 2020), SwinIR-light (Liang et al. 2021), A2F-L (Wang et al. 2020) and A-CubeNet (Hang et al. 2020). |
| Researcher Affiliation | Academia | State Key Laboratory for Novel Software Technology, Nanjing University, China liujie@nju.edu.cn, chenchao@smail.nju.edu.cn, {tangjie,gswu}@nju.edu.cn |
| Pseudocode | No | The paper describes the architecture and modules using text and diagrams (Fig. 3, 4, 5) but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at https://github.com/passerer/HPINet. |
| Open Datasets | Yes | The model is trained with a high-quality dataset DIV2K (Agustsson and Timofte 2017), which is widely used for image SR task. It includes 800 training images together with 100 validation images. Besides, we evaluate our model on five public SR benchmark datasets: Set5 (Bevilacqua et al. 2012), Set14 (Zeyde, Elad, and Protter 2010), B100 (Martin et al. 2001), Urban100 (Huang, Singh, and Ahuja 2015) and Manga109 (Matsui et al. 2017). |
| Dataset Splits | Yes | The model is trained with a high-quality dataset DIV2K (Agustsson and Timofte 2017), which is widely used for image SR task. It includes 800 training images together with 100 validation images. |
| Hardware Specification | Yes | The whole process is implemented by Pytorch on NVIDIA Tesla V100 GPUs. |
| Software Dependencies | No | The paper mentions 'Pytorch' but does not provide a specific version number, nor does it list other software dependencies with version numbers. |
| Experiment Setup | Yes | The cropped HR image size is initialized as 196×196 and increases to 896×896 epoch by epoch, and batch size is set as 6. Training images are augmented by random flipping and rotation. All models are trained using the Adam algorithm with L1 loss. The learning rate is initialized as 3×10⁻⁴ and halved per 200 epochs. For the proposed HPINet, the number of blocks is set as 8 and the corresponding patch size is set as {12, 16, 20, 24, 12, 16, 20, 24}. (See the training sketch below.) |
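
Since the Experiment Setup row packs several hyperparameters into prose, a minimal PyTorch training-loop sketch may help make them concrete. Only the quoted values come from the paper: batch size 6, Adam optimizer, L1 loss, initial learning rate 3×10⁻⁴ halved every 200 epochs, progressive 196→896 HR crops, and per-block patch sizes {12, 16, 20, 24, 12, 16, 20, 24}. The stand-in model, the random-tensor batches, the total epoch count, and the per-epoch crop growth step are all assumptions for illustration; the real HPINet implementation is in the linked repository.

```python
import torch
import torch.nn as nn
from torch.optim import Adam
from torch.optim.lr_scheduler import StepLR

# Hyperparameters quoted in the table above (x4 SR used as the example scale).
SCALE = 4
BATCH_SIZE = 6
INITIAL_LR = 3e-4
NUM_EPOCHS = 600  # assumption: the total epoch count is not stated in this excerpt
PATCH_SIZES = [12, 16, 20, 24, 12, 16, 20, 24]  # per-block patch sizes for the
# 8 blocks (consumed by the real HPINet constructor; unused by this stub)

# Trivial stand-in for HPINet; the real model is at https://github.com/passerer/HPINet.
model = nn.Sequential(
    nn.Upsample(scale_factor=SCALE, mode="bicubic"),
    nn.Conv2d(3, 3, kernel_size=3, padding=1),
)

criterion = nn.L1Loss()                                  # L1 training loss
optimizer = Adam(model.parameters(), lr=INITIAL_LR)      # Adam optimizer
scheduler = StepLR(optimizer, step_size=200, gamma=0.5)  # halve the LR every 200 epochs

for epoch in range(NUM_EPOCHS):
    # Progressive cropping: the HR crop grows from 196x196 toward 896x896
    # epoch by epoch; the per-epoch growth step (4 here) is an assumption.
    hr_size = min(196 + 4 * epoch, 896)
    lr_size = hr_size // SCALE

    # Placeholder batch; a real run would sample randomly flipped/rotated DIV2K crops.
    lr_img = torch.rand(BATCH_SIZE, 3, lr_size, lr_size)
    hr_img = torch.rand(BATCH_SIZE, 3, hr_size, hr_size)

    optimizer.zero_grad()
    loss = criterion(model(lr_img), hr_img)
    loss.backward()
    optimizer.step()
    scheduler.step()
```

`StepLR(step_size=200, gamma=0.5)` reproduces the reported "halved per 200 epochs" schedule directly; growing the crop inside the epoch loop mirrors the paper's coarse-to-fine training, though the exact growth schedule is not specified in this excerpt.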