QuantSR: Accurate Low-bit Quantization for Efficient Image Super-Resolution
Authors: Haotong Qin, Yulun Zhang, Yifu Ding, Yifan Liu, Xianglong Liu, Martin Danelljan, Fisher Yu
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our comprehensive experiments show that QuantSR outperforms existing state-of-the-art quantized SR networks in terms of accuracy while also providing more competitive computational efficiency. |
| Researcher Affiliation | Academia | ¹Beihang University, ²ETH Zürich |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code and models are released at https://github.com/htqin/QuantSR. |
| Open Datasets | Yes | Dataset. We adhere to the standard procedure in image SR, training on DIV2K [32] and evaluating on Set5 [2], Set14 [37], B100 [29], Urban100 [13], and Manga109 [30]. |
| Dataset Splits | No | The paper mentions training on DIV2K and evaluating on Set5, Set14, B100, Urban100, and Manga109, but it does not explicitly describe a train/validation/test split for any single dataset, nor does it specify a dedicated validation set used during training beyond the evaluation sets. |
| Hardware Specification | Yes | All experiments are conducted on NVIDIA RTX A6000 GPUs with PyTorch [31]. |
| Software Dependencies | No | The paper states 'All experiments are conducted on NVIDIA RTX A6000 GPUs with PyTorch [31]', but it does not specify the version number of PyTorch or any other software dependencies. |
| Experiment Setup | Yes | Training Strategy. In our training process, we follow the practices of previous studies [26, 40, 36, 25] by conducting data augmentation, which involves random rotations of 90°, 180°, 270°, and horizontal flipping. The models are trained for 300K iterations, with each training batch consisting of 32 image patches. The input size of each patch is 64×64. To optimize our model, we utilize the Adam optimizer [19]. The learning rate is initially set to 2e-4 and is then halved at the 250K-th iteration. |
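
For readers reproducing the training strategy quoted in the last row, the following PyTorch sketch wires together the stated hyperparameters: Adam with an initial learning rate of 2e-4 halved at the 250K-th iteration, 300K iterations, batches of 32 patches of size 64×64, and random 90°/180°/270° rotations plus horizontal flips. This is a minimal illustration under those assumptions; the model, dataset, loss choice (L1), and helper names are placeholders and are not taken from the authors' released code.

```python
# Hedged sketch of the quoted training setup; not the authors' implementation.
import random
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

def augment(lr_patch, hr_patch):
    """Random 0/90/180/270-degree rotation and horizontal flip,
    applied identically to the LR and HR patches."""
    k = random.randint(0, 3)  # number of 90-degree rotations
    lr_patch = torch.rot90(lr_patch, k, dims=(-2, -1))
    hr_patch = torch.rot90(hr_patch, k, dims=(-2, -1))
    if random.random() < 0.5:  # horizontal flip
        lr_patch = torch.flip(lr_patch, dims=(-1,))
        hr_patch = torch.flip(hr_patch, dims=(-1,))
    return lr_patch, hr_patch

def train(model, train_set, device="cuda"):
    # train_set is assumed to yield (LR patch, HR patch) pairs with 64x64 LR inputs.
    loader = DataLoader(train_set, batch_size=32, shuffle=True, num_workers=4)
    optimizer = torch.optim.Adam(model.parameters(), lr=2e-4)
    # Halve the learning rate at the 250K-th iteration.
    scheduler = torch.optim.lr_scheduler.MultiStepLR(
        optimizer, milestones=[250_000], gamma=0.5)
    criterion = nn.L1Loss()  # common SR choice; the loss is not specified in the excerpt

    iteration, max_iters = 0, 300_000
    while iteration < max_iters:
        for lr_img, hr_img in loader:
            lr_img, hr_img = augment(lr_img, hr_img)
            lr_img, hr_img = lr_img.to(device), hr_img.to(device)
            loss = criterion(model(lr_img), hr_img)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            scheduler.step()
            iteration += 1
            if iteration >= max_iters:
                break
```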