Rethinking Imbalance in Image Super-Resolution for Efficient Inference
Authors: Wei Yu, Bowen Yang, Qinglin Liu, Jianing Li, Shengping Zhang, Xiangyang Ji
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments across various models, datasets, and scale factors demonstrate that our method achieves comparable or superior performance to existing approaches with approximately a 34% reduction in computational cost. |
| Researcher Affiliation | Academia | School of Computer Science and Technology, Harbin Institute of Technology; School of Information Science and Technology, Tsinghua University |
| Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code is available at https://github.com/aipixel/WBSR. |
| Open Datasets | Yes | we apply DIV2K [1], a training dataset widely used for image SR, which includes 800 high-quality images with diverse content and texture details. |
| Dataset Splits | No | The paper mentions training and testing datasets but does not explicitly describe a separate validation split or how hyperparameters were tuned using one. |
| Hardware Specification | Yes | All methods are implemented using PyTorch and trained on an NVIDIA GeForce RTX 3090 |
| Software Dependencies | No | The paper mentions 'PyTorch' but does not specify a version number for this or any other software dependency. |
| Experiment Setup | Yes | All methods are implemented using PyTorch and trained on an NVIDIA GeForce RTX 3090 for 100 epochs with a batch size of 16, where the first 70 epochs use sample-level sampling and the rest use class-level sampling. The training patch size is set to 128×128 and augmented by horizontal and vertical flipping to enhance robustness. We utilize our L_bd loss along with the Adam optimizer [22], setting β1 = 0.9 and β2 = 0.999. To adjust the learning rate, we apply a cosine annealing strategy, starting with an initial learning rate of 2×10⁻⁴ and decaying to 10⁻⁷. |
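
The training recipe quoted above maps onto standard PyTorch components. The following is a minimal sketch of that optimizer and schedule setup, not the authors' released code: the SR network and the paper's L_bd loss are replaced with hypothetical placeholders (a single conv layer and an L1 loss), since their definitions are not reproduced in this summary.

```python
import torch
import torch.nn as nn
from torch.optim import Adam
from torch.optim.lr_scheduler import CosineAnnealingLR

# Placeholders (assumptions, not the paper's actual model or loss):
model = nn.Conv2d(3, 3, kernel_size=3, padding=1)  # stand-in for the SR network
criterion = nn.L1Loss()                            # stand-in for the L_bd loss

# Adam with beta1 = 0.9, beta2 = 0.999, initial learning rate 2e-4.
optimizer = Adam(model.parameters(), lr=2e-4, betas=(0.9, 0.999))

# Cosine annealing from 2e-4 down to 1e-7 over the 100 training epochs.
scheduler = CosineAnnealingLR(optimizer, T_max=100, eta_min=1e-7)

for epoch in range(100):
    # ... iterate over 128x128 training patches (batch size 16), augmented by
    # random horizontal/vertical flips; per the paper, the first 70 epochs use
    # sample-level sampling and the remaining 30 use class-level sampling ...
    scheduler.step()
```

This sketch only reproduces the reported hyperparameters; the data pipeline and the balanced loss would need to be taken from the released code at https://github.com/aipixel/WBSR.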