Flexible Residual Binarization for Image Super-Resolution
Authors: Yulun Zhang, Haotong Qin, Zixiang Zhao, Xianglong Liu, Martin Danelljan, Fisher Yu
ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive experiments and comparisons with recent leading binarization methods. Our proposed baselines, FRBC and FRBT, achieve superior performance both quantitatively and visually. (Section 4, Experiments) |
| Researcher Affiliation | Academia | 1 MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, China; 2 ETH Zürich, Switzerland; 3 Beihang University, China. |
| Pseudocode | Yes | Algorithm 1 Flexible Residual Binarization for Image SR |
| Open Source Code | No | The paper does not provide any explicit statements about releasing code or links to a code repository for the methodology described. |
| Open Datasets | Yes | Following the common practice (Lim et al., 2017; Zhang et al., 2018a), we adopt DIV2K (Timofte et al., 2017) as the training data. |
| Dataset Splits | No | The paper mentions DIV2K as training data and five benchmark datasets for testing, but does not explicitly describe a validation dataset split or how training data is partitioned for validation. |
| Hardware Specification | Yes | PyTorch (Paszke et al., 2017) is employed to conduct all experiments with NVIDIA RTX A6000 GPUs. |
| Software Dependencies | No | The paper mentions 'PyTorch (Paszke et al., 2017)' but does not provide specific version numbers for PyTorch or other software dependencies. |
| Experiment Setup | Yes | In the training phase, same as previous work (Lim et al., 2017; Zhang et al., 2018a; Xin et al., 2020; Liang et al., 2021), we conduct data augmentation (random rotation by 90°, 180°, 270° and horizontal flip). We train the model for 300K iterations. Each training batch extracts 32 image patches, whose size is 64×64. We utilize the Adam optimizer (Kingma & Ba, 2015) (β1=0.9, β2=0.999, and ε=10⁻⁸) during training. The initial learning rate is 2×10⁻⁴, which is reduced by half at the 250K-th iteration. |
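
To make the quoted experiment setup concrete, below is a minimal PyTorch training-loop sketch wiring up the stated hyperparameters: Adam with β1=0.9, β2=0.999, ε=1e-8, initial learning rate 2e-4 halved at the 250K-th iteration, 300K iterations, batches of 32 patches of size 64×64, and random 90°/180°/270° rotation plus horizontal flip augmentation. This is not the authors' code: the stand-in model, the random-tensor data sampler, and the L1 loss are placeholders assumed for illustration.

```python
# Sketch of the training configuration quoted above (placeholders, not the authors' code).
import random
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
scale = 4  # assumed SR scale for the stand-in model; not specified in this excerpt

def augment(lr_patch, hr_patch):
    """Random rotation by 90/180/270 degrees and random horizontal flip."""
    k = random.randint(0, 3)  # number of 90-degree rotations
    lr_patch = torch.rot90(lr_patch, k, dims=(-2, -1))
    hr_patch = torch.rot90(hr_patch, k, dims=(-2, -1))
    if random.random() < 0.5:  # horizontal flip
        lr_patch = torch.flip(lr_patch, dims=(-1,))
        hr_patch = torch.flip(hr_patch, dims=(-1,))
    return lr_patch, hr_patch

def sample_batch(batch_size=32, patch_size=64):
    """Stand-in for DIV2K LR/HR patch sampling: 32 patches of size 64x64 per batch."""
    lr = torch.rand(batch_size, 3, patch_size, patch_size)
    hr = torch.rand(batch_size, 3, patch_size * scale, patch_size * scale)
    return augment(lr, hr)

# Stand-in for the binarized SR network (FRBC/FRBT); the real architecture is not given here.
model = nn.Sequential(
    nn.Conv2d(3, 3, kernel_size=3, padding=1),
    nn.Upsample(scale_factor=scale, mode="nearest"),
).to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=2e-4, betas=(0.9, 0.999), eps=1e-8)
# Halve the learning rate at the 250K-th iteration.
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[250_000], gamma=0.5)
criterion = nn.L1Loss()  # common SR choice; the loss is not specified in this excerpt

for iteration in range(300_000):  # 300K training iterations
    lr_img, hr_img = sample_batch()
    sr = model(lr_img.to(device))
    loss = criterion(sr, hr_img.to(device))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()  # scheduler is stepped per iteration to hit the 250K milestone
```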