DDM$^2$: Self-Supervised Diffusion MRI Denoising with Generative Diffusion Models
Authors: Tiange Xiang, Mahmut Yurt, Ali B Syed, Kawin Setsompop, Akshay Chaudhari
ICLR 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct experiments on 4 real-world in-vivo diffusion MRI datasets and show that our DDM$^2$ demonstrates superior denoising performances ascertained with clinically-relevant visual qualitative and quantitative metrics. |
| Researcher Affiliation | Academia | Tiange Xiang, Mahmut Yurt, Ali B Syed, Kawin Setsompop & Akshay Chaudhari; Stanford University; {xtiange, myurt, alibsyed, kawins, akshaysc}@stanford.edu |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks (clearly labeled algorithm sections or code-like formatted procedures). |
| Open Source Code | Yes | Our source codes are available at: https://github.com/StanfordMIMI/DDM2. |
| Open Datasets | Yes | To evaluate the generalizability of DDM2, additional experiments were done on 3 other publicly-available brain diffusion MRI datasets acquired with different protocols with less advanced MRI encoding for image SNR and resolution: (i) Sherbrooke 3-Shell dataset (Garyfallidis et al., 2014); (ii) Stanford HARDI (Rokem, 2016); (iii) Parkinson's Progression Markers Initiative (PPMI) dataset (Marek et al., 2011). |
| Dataset Splits | No | The paper does not explicitly provide training/validation/test dataset splits (exact percentages, sample counts, citations to predefined splits, or detailed splitting methodology). |
| Hardware Specification | Yes | All experiments were performed on RTX GeForce 2080-Ti GPUs in PyTorch (Paszke et al., 2019). |
| Software Dependencies | No | The paper mentions "PyTorch" but does not specify a version number for it or any other software dependencies needed to replicate the experiment. |
| Experiment Setup | Yes | The Adam optimizer was used to optimize both networks with a fixed learning rate of 1e-4 and a batch size of 32. We trained Φ for 1e4 steps and F for 1e5 steps from scratch. |
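
The Open Datasets row cites DIPY (Garyfallidis et al., 2014), whose public data fetchers host two of the three listed datasets. The snippet below is a minimal loading sketch, assuming a current DIPY installation and network access; it is not the authors' own data pipeline, which lives in their repository. PPMI access requires a separate data-use application and is not covered here.

```python
# Hedged sketch: pulling two of the public diffusion MRI datasets named in the
# Open Datasets row via DIPY's bundled fetchers. Not the authors' loading code.
from dipy.data import read_sherbrooke_3shell, read_stanford_hardi

# Each reader downloads the dataset on first use and returns a NIfTI image
# plus a gradient table describing the diffusion encoding.
hardi_img, hardi_gtab = read_stanford_hardi()
sherbrooke_img, sherbrooke_gtab = read_sherbrooke_3shell()

hardi_data = hardi_img.get_fdata()  # 4D array: (x, y, z, diffusion volumes)
print(hardi_data.shape, len(hardi_gtab.bvals))
```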
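The Experiment Setup row fixes the optimizer, learning rate, batch size, and step counts, but not the network architectures or data pipeline. The sketch below wires those reported hyperparameters into a generic PyTorch training loop; the `train` helper, the convolutional stand-ins for Φ and F, and the random tensor dataset are all hypothetical placeholders, not the authors' implementation.

```python
# Hedged sketch of the reported optimization settings: Adam, fixed LR 1e-4,
# batch size 32, 1e4 steps for the stage-1 network Phi and 1e5 steps for the
# diffusion network F. Models, data, and loss are placeholders.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

def train(model: nn.Module, loader: DataLoader, num_steps: int) -> None:
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # fixed LR, as reported
    step = 0
    while step < num_steps:
        for (batch,) in loader:
            loss = (model(batch) - batch).pow(2).mean()  # placeholder objective
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            step += 1
            if step >= num_steps:
                break

# Placeholder dataset of noisy image patches; shapes are illustrative only.
dataset = TensorDataset(torch.randn(256, 1, 64, 64))
loader = DataLoader(dataset, batch_size=32, shuffle=True)  # batch size 32, as reported

phi = nn.Conv2d(1, 1, 3, padding=1)    # stands in for the stage-1 denoiser Phi
f_net = nn.Conv2d(1, 1, 3, padding=1)  # stands in for the diffusion model F

train(phi, loader, num_steps=10_000)     # 1e4 steps for Phi
train(f_net, loader, num_steps=100_000)  # 1e5 steps for F
```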