Decomposed Diffusion Sampler for Accelerating Large-Scale Inverse Problems

Authors: Hyungjin Chung, Suhyeon Lee, Jong Chul Ye

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct experiments on two distinct applications: accelerated MRI and 3D CT reconstruction. For the former, we follow the evaluation protocol of Chung & Ye (2022) and test our method on the fastMRI knee dataset (Zbontar et al., 2018) with diverse sub-sampling patterns.
Researcher Affiliation | Academia | 1 Dept. of Bio & Brain Engineering, KAIST; 2 Kim Jaechul Graduate School of AI, KAIST. {hj.chung, suhyeon.lee, jong.ye}@kaist.ac.kr
Pseudocode | Yes | In the following tables, we list all the DDS algorithms used throughout the manuscript. For simplicity, we define CG(A, y, x, M) to be running M conjugate gradient steps with initialization x. For completeness, we include pseudo-code of the CG method in Algorithm 1, which is used throughout the work.
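The CG(A, y, x, M) subroutine referenced above is the standard conjugate gradient method for solving A x = y. A minimal NumPy sketch is given below, assuming A is a symmetric positive-definite matrix; the function name and signature mirror the paper's notation, but the implementation details are illustrative, not the authors' code.

```python
import numpy as np

def cg(A, y, x, M):
    """Run M conjugate gradient steps for A x = y, starting from x.

    Illustrative sketch of the CG(A, y, x, M) routine described in the
    paper's appendix; assumes A is symmetric positive definite.
    """
    r = y - A @ x              # initial residual
    p = r.copy()               # initial search direction
    rs = r @ r
    for _ in range(M):
        Ap = A @ p
        alpha = rs / (p @ Ap)  # step size along direction p
        x = x + alpha * p
        r = r - alpha * Ap     # updated residual
        rs_new = r @ r
        p = r + (rs_new / rs) * p  # conjugate direction update
        rs = rs_new
    return x
```

For an n x n SPD system, CG converges in at most n iterations in exact arithmetic, which is why a small fixed M (the paper uses M = 5) can already give a good approximate solve.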
Open Source Code | Yes | Code is available at https://github.com/HJ-harry/DDS
Open Datasets | Yes | We conduct all PI experiments with the fastMRI knee dataset (Zbontar et al., 2018). ... AAPM 2016 CT low-dose grand challenge data leveraged in Chung et al. (2022a; 2023b) is used. ... All medical data used in our experiments were publicly available and fully anonymized, ensuring the utmost respect for patient confidentiality.
Dataset Splits | No | While the paper mentions using a 'validation dataset' from which the test set is selected for MRI, and finding parameters via 'grid search on 50 validation images' for TV, it does not provide explicit details about the training/validation/test splits (e.g., exact percentages or counts for training and validation sets) for its main experiments.
Hardware Specification | Yes | on a single commodity GPU (RTX 3090).
Software Dependencies | No | The paper mentions software components such as the 'torch-radon (Ronchetti, 2020) package', the 'U-Net implementation from ADM (Dhariwal & Nichol, 2021)', and 'sigpy.mri.app.TotalVariation', but it does not specify exact version numbers for these or other key software dependencies such as Python, PyTorch, or CUDA.
Experiment Setup | Yes | For all proposed methods, we employ M = 5, η = 0.15 for 19 NFE, η = 0.5 for 49 NFE, and η = 0.8 for 99 NFE unless specified otherwise. ... train each model for 1M iterations with a batch size of 4 and an initial learning rate of 1e-4 on a single RTX 3090 GPU.
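The NFE-dependent settings quoted above can be collected into a small lookup. The sketch below only restates the numbers from the paper's setup; the container and function names (`DDS_SETTINGS`, `eta_for`) are hypothetical and not part of the released code.

```python
# Hypothetical lookup of the DDS defaults quoted in the setup above.
# M is the number of CG steps; eta is selected by the NFE budget.
DDS_SETTINGS = {
    "cg_steps": 5,  # M = 5 for all proposed methods
    "eta_by_nfe": {19: 0.15, 49: 0.5, 99: 0.8},
}

def eta_for(nfe: int) -> float:
    """Return the default eta for a given NFE budget (per the paper's setup)."""
    return DDS_SETTINGS["eta_by_nfe"][nfe]
```

Note that eta grows with the NFE budget: with more sampling steps, the schedule tolerates more stochasticity per step.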