Basic Binary Convolution Unit for Binarized Image Restoration Network
Authors: Bin Xia, Yulun Zhang, Yitong Wang, Yapeng Tian, Wenming Yang, Radu Timofte, Luc Van Gool
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct experiments on different IR tasks, and our BBCU significantly outperforms other BNNs and lightweight models, which shows that BBCU can serve as a basic unit for binarized IR networks. |
| Researcher Affiliation | Collaboration | Bin Xia1, Yulun Zhang2, Yitong Wang3, Yapeng Tian4, Wenming Yang1, Radu Timofte5, and Luc Van Gool2 1Tsinghua University 2ETH Zürich 3ByteDance Inc 4University of Texas at Dallas 5University of Würzburg |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code is available at https://github.com/Zj-BinXia/BBCU |
| Open Datasets | Yes | We train all models on DIV2K (Agustsson & Timofte, 2017), which contains 800 high-quality images. Besides, we adopt widely used test sets for evaluation and report PSNR and SSIM. [...] In addition, we use Set5 (Bevilacqua et al., 2012), Set14 (Zeyde et al., 2010), B100 (Martin et al., 2001), Urban100 (Huang et al., 2015), and Manga109 (Matsui et al., 2017) for evaluation. |
| Dataset Splits | No | The paper specifies training parameters and datasets used for training and testing, but it does not explicitly describe a validation data split (e.g., 'X% for validation', or a specific validation set name) separate from training and test. |
| Hardware Specification | Yes | We implement our models with a Tesla V100 GPU. |
| Software Dependencies | No | The paper mentions 'Matlab standard JPEG encoder' but does not provide specific version numbers for any software dependencies. |
| Experiment Setup | Yes | The mini-batch contains 16 images with the size of 192×192 randomly cropped from training data. We set the initial learning rate to 1×10⁻⁴, train models with 300 epochs, and perform halving every 200 epochs. [...] The amplification factor k in the residual alignment is set to 130. |
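
As a quick reference, the sketch below collects the training details quoted above (DIV2K training set, batch size, crop size, learning-rate schedule, and the residual-alignment amplification factor) into a single configuration and derives the step schedule implied by "halving every 200 epochs". All identifiers (`train_config`, `learning_rate`) are illustrative assumptions made for this summary, not names taken from the authors' released code.

```python
# Hypothetical summary of the reported training setup; field names are
# illustrative and do not come from the authors' repository.
train_config = {
    "train_set": "DIV2K",             # 800 high-quality training images
    "batch_size": 16,                 # mini-batch of 16 patches
    "patch_size": 192,                # 192x192 random crops
    "initial_lr": 1e-4,               # initial learning rate
    "epochs": 300,                    # total training epochs
    "lr_halving_every": 200,          # learning rate halved every 200 epochs
    "residual_amplification_k": 130,  # amplification factor k in residual alignment
    "gpu": "Tesla V100",              # hardware reported in the paper
}

def learning_rate(epoch: int, cfg: dict = train_config) -> float:
    """Piecewise-constant schedule implied by 'halving every 200 epochs'."""
    return cfg["initial_lr"] * 0.5 ** (epoch // cfg["lr_halving_every"])

if __name__ == "__main__":
    for e in (0, 199, 200, 299):
        print(f"epoch {e:3d}: lr = {learning_rate(e):.1e}")
```

Under this reading, the learning rate stays at 1×10⁻⁴ for epochs 0–199 and drops to 5×10⁻⁵ for the remaining 100 epochs of the 300-epoch run.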