Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Direct Diffusion Bridge using Data Consistency for Inverse Problems
Authors: Hyungjin Chung, Jeongsol Kim, Jong Chul Ye
NeurIPS 2023 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 4 Experiments; 4.1 Model, Dataset; 4.2 Results |
| Researcher Affiliation | Academia | 1 Dept. of Bio and Brain Engineering, 2 Graduate School of AI, Korea Advanced Institute of Science and Technology (KAIST) |
| Pseudocode | Yes | Algorithm 1 CDDB Algorithm 2 CDDB (deep) |
| Open Source Code | Yes | Code is open-sourced at https://github.com/HJ-harry/CDDB |
| Open Datasets | Yes | All experiments are based on ImageNet 256×256 [10] |
| Dataset Splits | Yes | We follow the standards of [26] and test our method on the following degradations: sr4x-{bicubic, pool}, deblur-{uniform, gauss}, and JPEG restoration with 1k validation images. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models or memory used for experiments. |
| Software Dependencies | No | The paper mentions various models and frameworks but does not specify software dependencies with version numbers (e.g., Python, PyTorch, CUDA versions). |
| Experiment Setup | Yes | For choosing the NFE and the hyper-parameters for each method, we closely abide to the original advised implementation: DDRM (20 NFE), DPS (1000 NFE), ΠGDM (100 NFE), DDNM (100 NFE), DDS (100 NFE). For I2SB along with the proposed method, we choose 100 NFE for JPEG restoration as we found it to be the most stable, and choose 1000 NFE for all other tasks. |