Hybrid Pixel-Unshuffled Network for Lightweight Image Super-resolution
Authors: Bin Sun, Yulun Zhang, Songyao Jiang, Yun Fu
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The comparison findings demonstrate that, with fewer parameters and computational costs, our HPUN achieves and surpasses the state-of-the-art performance on SISR. |
| Researcher Affiliation | Collaboration | Bin Sun1,3, Yulun Zhang2, Songyao Jiang1, Yun Fu1,3 1 Northeastern University, Boston, MA, USA 2 ETH Zürich, Zürich, Switzerland 3 AInnovation Labs Inc., Boston, MA, USA |
| Pseudocode | No | The paper describes the model architecture and operations using text, mathematical equations, and diagrams, but does not include an explicit pseudocode block or algorithm section. |
| Open Source Code | Yes | All results are provided in the GitHub repository https://github.com/Sun1992/HPUN. |
| Open Datasets | Yes | As training data, we use the DIV2K dataset (Timofte et al. 2017) following the popular works (Han et al. 2015; Timofte et al. 2017; Lim et al. 2017; Zhang, Zuo, and Zhang 2018). We used the following testing datasets: Set5 (Bevilacqua et al. 2012), Set14 (Zeyde, Elad, and Protter 2010), B100 (Martin et al. 2001), Urban100 (Huang, Singh, and Ahuja 2015), and Manga109 (Matsui et al. 2017). |
| Dataset Splits | No | The paper mentions using DIV2K for training and specific datasets for testing, but does not explicitly state the dataset splits for training, validation, and testing (e.g., percentages or counts for a validation set). |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU model, CPU type, or memory specifications) used for running the experiments. |
| Software Dependencies | No | The paper states: "We implement our HPUN with the PyTorch (Paszke et al. 2019) and update it with Adam optimizer (Kingma and Ba 2015)." However, it does not specify exact version numbers for PyTorch or the Adam optimizer library. |
| Experiment Setup | Yes | Following the popular settings (Lim et al. 2017), we extract 16 LR RGB patches at random as inputs in each training batch. The size of each patch is 48×48. The patches are randomly augmented by flipping horizontally or vertically and rotating 90°. There are 14,200 iterations in one epoch. [...] The learning rate is initialized to 2×10⁻⁴ for all layers and follows the cosine scheduler with 250 epochs in each cycle. We finetune the model with longer epochs and larger batch size for final comparisons. Some experiments use the step scheduler and will be emphasized in the caption for fair comparison. |
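The quoted experiment setup (random 48×48 patch extraction, flip/rotate augmentation, and a cosine learning-rate schedule starting at 2×10⁻⁴ with 250-epoch cycles) can be sketched as follows. This is a minimal illustration of the described settings, not the authors' code; the function names `sample_patch` and `cosine_lr` are our own, and the NumPy-only implementation stands in for the paper's PyTorch pipeline.

```python
import numpy as np

def sample_patch(lr_img, patch=48, rng=None):
    """Randomly crop a patch and augment it with horizontal/vertical
    flips and a 90-degree rotation, as described in the paper's setup."""
    rng = rng or np.random.default_rng()
    h, w = lr_img.shape[:2]
    y = rng.integers(0, h - patch + 1)
    x = rng.integers(0, w - patch + 1)
    p = lr_img[y:y + patch, x:x + patch]
    if rng.random() < 0.5:   # horizontal flip
        p = p[:, ::-1]
    if rng.random() < 0.5:   # vertical flip
        p = p[::-1, :]
    if rng.random() < 0.5:   # 90-degree rotation
        p = np.rot90(p)
    return p

def cosine_lr(epoch, base_lr=2e-4, cycle=250):
    """Cosine learning-rate schedule that restarts every `cycle` epochs,
    decaying from base_lr toward zero within each cycle."""
    t = epoch % cycle
    return 0.5 * base_lr * (1.0 + np.cos(np.pi * t / cycle))
```

A training batch of 16 such patches per iteration, with 14,200 iterations per epoch, would reproduce the batch construction the paper describes.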