ResDiff: Combining CNN and Diffusion Model for Image Super-resolution
Authors: Shuyao Shang, Zhengyang Shan, Guangxing Liu, LunQian Wang, XingHua Wang, Zekai Zhang, Jinglin Zhang
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The extensive experiments on multiple benchmark datasets demonstrate that ResDiff outperforms previous diffusion-based methods in terms of shorter model convergence time, superior generation quality, and more diverse samples. |
| Researcher Affiliation | Academia | Shuyao Shang 1, Zhengyang Shan 1, Guangxing Liu 1, LunQian Wang 2, XingHua Wang 2, Zekai Zhang 3, Jinglin Zhang 1 — 1 Shandong University, 2 Linyi University, 3 Qilu University of Technology |
| Pseudocode | Yes | Algorithm 1: ResDiff Inference |
| Open Source Code | No | The paper does not provide any concrete links or explicit statements about the release of its source code. |
| Open Datasets | Yes | Experiments on two face datasets (FFHQ and CelebA) and two general datasets (DIV2K and Urban100) demonstrate that ResDiff not only accelerates the model's convergence speed but also generates more fine-grained images. |
| Dataset Splits | Yes | Our ResDiff is trained solely on the provided training data to guarantee a fair comparison. |
| Hardware Specification | No | The paper mentions 'Due to equipment limitations' in the conclusion but does not specify the hardware (GPU/CPU models, memory, etc.) used for running the experiments. |
| Software Dependencies | No | The paper does not explicitly list specific software dependencies with version numbers (e.g., Python, PyTorch, CUDA versions). |
| Experiment Setup | No | The main paper does not describe the experiment setup; detailed information about the training process, hyperparameters, and other relevant details is deferred to the supplementary material. |
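Since the paper provides pseudocode ("Algorithm 1: ResDiff Inference") but no released source code, the core idea can be sketched as follows. This is a minimal, hypothetical illustration of the ResDiff scheme, not the authors' implementation: a pre-trained CNN produces a coarse super-resolved image, a conditional diffusion model iteratively denoises a residual, and the final output is the sum of the two. All function names, the DDIM-style update, and the noise schedule are assumptions for the sketch.

```python
import math
import numpy as np

def resdiff_inference(cnn, eps_model, lr_image, alphas_cumprod, rng=None):
    """Hypothetical sketch of ResDiff-style inference.

    cnn            : callable, maps the LR input to a coarse SR estimate
    eps_model      : callable(x, y_cnn, t), predicts noise in the residual,
                     conditioned on the CNN output (illustrative signature)
    lr_image       : low-resolution input (here already at target size)
    alphas_cumprod : decreasing noise schedule, one value per timestep
    """
    rng = rng or np.random.default_rng(0)
    y_cnn = cnn(lr_image)                       # coarse CNN prediction
    x = rng.standard_normal(y_cnn.shape)        # residual starts as pure noise
    T = len(alphas_cumprod)
    for t in reversed(range(T)):
        a_t = alphas_cumprod[t]
        a_prev = alphas_cumprod[t - 1] if t > 0 else 1.0
        eps = eps_model(x, y_cnn, t)            # predicted noise at step t
        # Estimate the clean residual from the current noisy sample.
        x0 = (x - math.sqrt(1.0 - a_t) * eps) / math.sqrt(a_t)
        # Deterministic (DDIM-style) step toward timestep t-1.
        x = math.sqrt(a_prev) * x0 + math.sqrt(1.0 - a_prev) * eps
    return y_cnn + x                            # CNN output plus refined residual
```

The key design point the paper's title describes is visible in the last line: the diffusion model only has to generate the residual detail missing from the CNN output, which is what shortens convergence relative to generating the full image from noise.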