DySR: Adaptive Super-Resolution via Algorithm and System Co-design
Authors: Syed Zawad, Cheng Li, Zhewei Yao, Elton Zheng, Yuxiong He, Feng Yan
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate on a diverse set of hardware and datasets to show that DySR can generate models close to the Pareto frontier while maintaining a steady frame rate throughput with a memory footprint of around 40% less compared to the assembled baseline methods. |
| Researcher Affiliation | Collaboration | 1University of Nevada, Reno, 2Microsoft Research, 3University of Houston szawad@nevada.unr.edu, {chengli1,zheweiyao,elton.zheng,yuxhe}@microsoft.com, fyan5@central.uh.edu |
| Pseudocode | Yes | Algorithm 1 Model Selection Policy |
| Open Source Code | Yes | https://github.com/syed-zawad/srnas |
| Open Datasets | Yes | For training the searched models till convergence, we use Div2K training dataset with 64x64 patch size, batch size of 16 and learning rate of 0.0043. For the video super-resolution dataset, we train with Vimeo90k (Xue et al. (2019)). |
| Dataset Splits | No | The paper mentions training and testing datasets but does not explicitly provide details for a validation split. |
| Hardware Specification | Yes | We implement DySR using PyTorch and perform the search and training using 4 A100 GPUs, taking 21 GPU days to complete per run. During the deployment, we perform a quick profiling (e.g., a few steps of forward passes) on the target device (e.g. Snapdragon 855) to measure its inference latency and create a profiled database. [...] We use a mobile CPU Snapdragon 855, a laptop CPU Intel i5-560M, a desktop grade GPU 1080Ti and a server-grade GPU A100 (Table 5). |
| Software Dependencies | No | The paper mentions "PyTorch" but does not specify a version number or other software dependencies with versioning. |
| Experiment Setup | Yes | For training the searched models till convergence, we use Div2K training dataset with 64x64 patch size, batch size of 16 and learning rate of 0.0043. For the search parameters, we use the values 15, 2 and 5 for Nl, Nb and B respectively. |
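The "Experiment Setup" row above quotes the paper's training hyperparameters (Div2K, 64x64 patches, batch size 16, learning rate 0.0043). The following is a minimal PyTorch sketch of such a training configuration, not the authors' released code; `searched_model`, `train_dataset`, and the L1 reconstruction loss are illustrative assumptions, and the actual implementation is in the linked repository.

```python
# Minimal sketch of the training configuration quoted above (assumptions noted below).
import torch
from torch.utils.data import DataLoader

PATCH_SIZE = 64        # 64x64 training patches, per the paper's stated setup
BATCH_SIZE = 16
LEARNING_RATE = 0.0043

def train(searched_model, train_dataset, num_steps):
    """Train one searched SR model until a fixed step budget is reached.

    `searched_model` and `train_dataset` (yielding (lr_patch, hr_patch) pairs)
    are hypothetical placeholders; the loss and optimizer choices are
    assumptions, not taken from the paper.
    """
    loader = DataLoader(train_dataset, batch_size=BATCH_SIZE, shuffle=True)
    optimizer = torch.optim.Adam(searched_model.parameters(), lr=LEARNING_RATE)
    criterion = torch.nn.L1Loss()  # common SR reconstruction loss (assumption)

    searched_model.train()
    step = 0
    while step < num_steps:
        for lr_patch, hr_patch in loader:
            optimizer.zero_grad()
            sr = searched_model(lr_patch)       # super-resolved output
            loss = criterion(sr, hr_patch)      # compare against ground-truth HR patch
            loss.backward()
            optimizer.step()
            step += 1
            if step >= num_steps:
                break
```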