Binarized Diffusion Model for Image Super-Resolution
Authors: Zheng Chen, Haotong Qin, Yong Guo, Xiongfei Su, Xin Yuan, Linghe Kong, Yulun Zhang
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Comprehensive experiments demonstrate that our BI-DiffSR outperforms existing binarization methods. |
| Researcher Affiliation | Academia | 1Shanghai Jiao Tong University, 2ETH Zürich, 3Max Planck Institute for Informatics, 4Westlake University |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is released at: https://github.com/zhengchen1999/BI-DiffSR. |
| Open Datasets | Yes | We take DIV2K [59] and Flickr2K [33] as the training dataset. |
| Dataset Splits | No | The paper specifies training datasets (DIV2K, Flickr2K) and testing datasets (e.g., Manga109 in the ablation study), but does not explicitly define a separate validation split with proportions or sample counts. |
| Hardware Specification | Yes | Our model is implemented based on PyTorch [47] with two Nvidia A100-80G GPUs. |
| Software Dependencies | No | The paper mentions PyTorch as the implementation framework but does not specify its version or any other software dependencies with version numbers. |
| Experiment Setup | Yes | For the noise estimation network, we set the encoder and decoder level to 4. ... We train models with the L1 loss. We employ the Adam optimizer [22] with β₁=0.9 and β₂=0.99, and a learning rate of 1×10⁻⁴. The batch size is set to 16, with a total of 1,000K iterations. |
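For context on the "binarization methods" referenced in the Research Type row, the sketch below shows a generic weight-binarization operator with a straight-through estimator (STE). This is a common baseline construction in the binarization literature, not the paper's specific BI-DiffSR design; all names here are illustrative.

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """Sign-function weight binarization with a straight-through estimator.

    A generic sketch, not the paper's exact BI-DiffSR modules.
    """

    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        # Binarize to {-1, +1}, scaled by the mean absolute value so the
        # binary weights roughly preserve the original magnitude.
        return w.sign() * w.abs().mean()

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        # Straight-through: pass gradients only where |w| <= 1,
        # since sign() itself has zero gradient almost everywhere.
        return grad_out * (w.abs() <= 1).float()

w = torch.randn(8, 8, requires_grad=True)
w_bin = BinarizeSTE.apply(w)  # binarized weights used in the forward pass
w_bin.sum().backward()        # gradients flow back to the latent full-precision w
```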
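The Open Datasets row reports training on DIV2K and Flickr2K. A minimal PyTorch sketch of assembling such a paired HR/LR training set follows; the directory layout (`HR`/`LR` subfolders under `data/DIV2K` and `data/Flickr2K`) is an assumption for illustration, not taken from the paper.

```python
from pathlib import Path

from torch.utils.data import ConcatDataset, Dataset
from torchvision.io import read_image

class SRFolderDataset(Dataset):
    """Paired HR/LR images from one dataset root (folder layout assumed)."""

    def __init__(self, root):
        self.hr_paths = sorted(Path(root, "HR").glob("*.png"))
        self.lr_paths = sorted(Path(root, "LR").glob("*.png"))

    def __len__(self):
        return len(self.hr_paths)

    def __getitem__(self, idx):
        # Load images as float tensors in [0, 1].
        hr = read_image(str(self.hr_paths[idx])).float() / 255.0
        lr = read_image(str(self.lr_paths[idx])).float() / 255.0
        return lr, hr

# Merge DIV2K and Flickr2K (a combination often called DF2K) for training.
train_set = ConcatDataset(
    [SRFolderDataset("data/DIV2K"), SRFolderDataset("data/Flickr2K")]
)
```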
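The Experiment Setup row pins down the reported training hyperparameters. The sketch below wires them into a standard PyTorch training loop; the model and data batches are placeholders, since the actual noise-estimation network is defined in the released BI-DiffSR code.

```python
import torch
import torch.nn as nn

# Placeholder stand-in for the paper's noise-estimation network.
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1),
)

# Optimizer and loss as reported: Adam with β1=0.9, β2=0.99, lr 1e-4, L1 loss.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.99))
criterion = nn.L1Loss()

batch_size = 16         # as reported
total_iters = 1_000_000  # "1,000K iterations"; shorten for a smoke test

for step in range(total_iters):
    # Random placeholder batch; the paper trains on DIV2K + Flickr2K patches.
    inputs = torch.randn(batch_size, 3, 64, 64)
    targets = torch.randn(batch_size, 3, 64, 64)

    optimizer.zero_grad()
    loss = criterion(model(inputs), targets)
    loss.backward()
    optimizer.step()
```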