UPS: Unified Projection Sharing for Lightweight Single-Image Super-resolution and Beyond
Authors: Kun Zhou, Xinyu Lin, Zhonghang Liu, Xiaoguang Han, Jiangbo Lu
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate that our proposed UPS achieves state-of-the-art performance relative to leading lightweight SISR methods, as verified by various popular benchmarks. |
| Researcher Affiliation | Collaboration | Kun Zhou1,2, Xinyu Lin1,2, Zhonghang Liu3, Xiaoguang Han1, Jiangbo Lu2 — 1SSE, CUHK-Shenzhen, 2SmartMore Corporation, 3SMU, Singapore. hanxiaoguang@cuhk.edu.cn, jiangbo.lu@gmail.com |
| Pseudocode | Yes | Algorithm 1 Pseudo Code of the i-th STL |
| Open Source Code | No | Code will be made publicly available at https://github.com/redrock303/UPS-NeurIPS2024. |
| Open Datasets | Yes | Following previous studies [7, 28, 18], we utilize the DIV2K [37] image dataset for training. |
| Dataset Splits | No | Following previous studies [7, 28, 18], we utilize the DIV2K [37] image dataset for training. Subsequently, we conduct comprehensive evaluations on several widely-used SISR benchmarks, including Set5 [38], Set14 [39], BSD100 [11], Urban100 [40], and Manga109 [41]. |
| Hardware Specification | Yes | Training is conducted for 600K iterations, utilizing four NVIDIA RTX 3090 GPUs. |
| Software Dependencies | No | Our UPS model is developed by PyTorch and incorporates several commonly used data augmentation techniques... |
| Experiment Setup | Yes | During training, we employ the Adam [35] optimization with cosine annealing [36], starting with an initial learning rate of 4e-4. We set the batch size as 32 and the input image size as 64×64. Training is conducted for 600K iterations, utilizing four NVIDIA RTX 3090 GPUs. |
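
The quoted setup (Adam with cosine annealing, initial learning rate 4e-4, batch size 32, 64×64 inputs, 600K iterations) can be sketched in PyTorch as below. This is a minimal illustration of the reported schedule, not the authors' released code: the placeholder convolution stands in for the UPS network, and the loss is a dummy.

```python
import torch

# Placeholder module standing in for the UPS network (assumption; the real
# architecture is defined in the paper's repository).
model = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)

# Adam optimizer with the reported initial learning rate of 4e-4,
# decayed by cosine annealing over the reported 600K iterations.
optimizer = torch.optim.Adam(model.parameters(), lr=4e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=600_000)

# Reported batch size 32 and 64x64 input patches (random tensors here).
batch = torch.randn(32, 3, 64, 64)

for _ in range(3):  # 600K iterations in the paper; only a few steps shown
    optimizer.zero_grad()
    loss = model(batch).abs().mean()  # dummy loss for illustration
    loss.backward()
    optimizer.step()
    scheduler.step()
```

Cosine annealing smoothly lowers the learning rate from 4e-4 toward zero as the step count approaches `T_max`, which matches the schedule quoted from the paper.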