Feature Distillation Interaction Weighting Network for Lightweight Image Super-resolution
Authors: Guangwei Gao, Wenjie Li, Juncheng Li, Fei Wu, Huimin Lu, Yi Yu
AAAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments show that our FDIWN is superior to other models to strike a good balance between model performance and efficiency. Experiments Datasets and Evaluation Metrics Following previous works, we use the DIV2K (Agustsson and Timofte 2017) as the training dataset, which contains 800 pairs of images. For testing, we use Set5 (Bevilacqua et al. 2012), Set14 (Zeyde, Elad, and Protter 2010), BSDS100 (Martin et al. 2001), and Urban100 (Huang, Singh, and Ahuja 2015) to verify the effectiveness of the proposed FDIWN. Meanwhile, two metrics on the Y channel in the YCbCr color space, namely PSNR and SSIM, are used to evaluate the model performance. |
| Researcher Affiliation | Academia | Guangwei Gao1 , Wenjie Li1 , Juncheng Li2*, Fei Wu1, Huimin Lu3, Yi Yu4 1 Nanjing University of Posts and Telecommunications 2 The Chinese University of Hong Kong 3 Kyushu Institute of Technology 4 National Institute of Informatics |
| Pseudocode | No | The paper includes diagrams of the network architecture and module structures, but no explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code is available at https://github.com/IVIPLab/FDIWN. |
| Open Datasets | Yes | Following previous works, we use the DIV2K (Agustsson and Timofte 2017) as the training dataset, which contains 800 pairs of images. For testing, we use Set5 (Bevilacqua et al. 2012), Set14 (Zeyde, Elad, and Protter 2010), BSDS100 (Martin et al. 2001), and Urban100 (Huang, Singh, and Ahuja 2015) to verify the effectiveness of the proposed FDIWN. |
| Dataset Splits | No | The paper states: 'Following previous works, we use the DIV2K (Agustsson and Timofte 2017) as the training dataset...' and 'For testing, we use Set5 (Bevilacqua et al. 2012), Set14 (Zeyde, Elad, and Protter 2010), BSDS100 (Martin et al. 2001), and Urban100 (Huang, Singh, and Ahuja 2015)'. While it mentions training and testing, it does not explicitly specify a validation dataset or split, nor does it detail how hyperparameters were tuned if a separate validation set was used. |
| Hardware Specification | Yes | All our experiments are performed on NVIDIA RTX 2080Ti GPUs. |
| Software Dependencies | No | The paper states: 'We implement our model with the PyTorch framework and update it with Adam optimizer.' However, it does not provide specific version numbers for PyTorch or any other software libraries, which is necessary for full reproducibility. |
| Experiment Setup | Yes | Each mini-batch during the training consists of 16 RGB image blocks with the size of 48×48, which are randomly cropped from the LR image. Meanwhile, the training dataset is enhanced by random rotation at different angles and horizontal flipping for data augmentation. The learning rate is initialized to 2e-4 and a total of 1000 epochs are updated. We implement our model with the PyTorch framework and update it with Adam optimizer. All our experiments are performed on NVIDIA RTX 2080Ti GPUs. As for the model settings, the final version of FDIWN consists of 6 FSWGs, while the tiny version FDIWN-M only consists of 4 FSWGs. The number of input channels is initialized to 24 and the value of the adaptive weight is 1. |
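The evaluation protocol quoted above (PSNR on the Y channel of YCbCr) is standard in SR papers but rarely spelled out in code. A minimal sketch of it, assuming the common ITU-R BT.601 RGB-to-Y conversion and border shaving; the function names and the `shave` convention are illustrative, not taken from the paper's repository:

```python
# Sketch: PSNR on the Y channel of YCbCr, the metric quoted in the report above.
# Assumes ITU-R BT.601 luma coefficients and uint8-range [0, 255] inputs.
import numpy as np

def rgb_to_y(img):
    """Convert an RGB image (H, W, 3) in [0, 255] to the BT.601 Y channel."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return 16.0 + (65.481 * r + 128.553 * g + 24.966 * b) / 255.0

def psnr_y(sr, hr, shave=4):
    """PSNR between SR output and ground truth on the Y channel.
    `shave` crops border pixels (commonly set to the scale factor)."""
    y_sr = rgb_to_y(sr.astype(np.float64))
    y_hr = rgb_to_y(hr.astype(np.float64))
    if shave > 0:
        y_sr = y_sr[shave:-shave, shave:-shave]
        y_hr = y_hr[shave:-shave, shave:-shave]
    mse = np.mean((y_sr - y_hr) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(255.0 ** 2 / mse)
```

SSIM on the Y channel follows the same conversion; `skimage.metrics.structural_similarity` is a common off-the-shelf choice for it.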
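The patch-sampling step from the experiment setup (16 random 48×48 LR crops per mini-batch, with rotation and flip augmentation) can be sketched as follows. This is an assumption-laden illustration: the exact augmentation set (90° rotations, horizontal flip) follows common SR practice, not the paper's released code.

```python
# Sketch of the mini-batch assembly described in the Experiment Setup row:
# random 48x48 crops from LR images plus random rotation/flip augmentation.
import numpy as np

PATCH = 48  # LR patch size from the paper's setup

def sample_patch(lr, rng):
    """Randomly crop a PATCH x PATCH block and apply a random rotation/flip."""
    h, w = lr.shape[:2]
    y = rng.integers(0, h - PATCH + 1)
    x = rng.integers(0, w - PATCH + 1)
    patch = lr[y:y + PATCH, x:x + PATCH]
    patch = np.rot90(patch, k=rng.integers(0, 4))  # rotate 0/90/180/270 degrees
    if rng.random() < 0.5:
        patch = patch[:, ::-1]  # horizontal flip
    return patch

def make_batch(lr_images, batch_size=16, seed=0):
    """Draw a mini-batch of augmented patches (batch size 16 per the paper)."""
    rng = np.random.default_rng(seed)
    picks = [lr_images[rng.integers(0, len(lr_images))] for _ in range(batch_size)]
    return np.stack([sample_patch(img, rng) for img in picks])
```

In the actual pipeline such batches would feed a PyTorch `DataLoader` and the Adam optimizer at the stated initial learning rate of 2e-4.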