A Conditional Normalizing Flow for Accelerated Multi-Coil MR Imaging

Authors: Jeffrey Wen, Rizwan Ahmad, Philip Schniter

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Using fastMRI brain and knee data, we demonstrate fast inference and accuracy that surpasses recent posterior sampling techniques for MRI. Code is available at https://github.com/jwen307/mri_cnf
Researcher Affiliation | Academia | Jeffrey Wen (1), Rizwan Ahmad (2), Philip Schniter (1). (1) Dept. of ECE, The Ohio State University, Columbus, OH 43210, USA. (2) Dept. of BME, The Ohio State University, Columbus, OH 43210, USA.
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code is available at https://github.com/jwen307/mri_cnf
Open Datasets | Yes | We apply our network to two datasets: the fastMRI knee and fastMRI brain datasets (Zbontar et al., 2018).
Dataset Splits | Yes | For the knee data, we use the non-fat-suppressed subset, giving 17286 training and 3592 validation images. [...] With the brain data, we use the T2-weighted images and take the first 8 slices of all volumes with at least 8 coils. This provides 12224 training and 3352 validation images.
Hardware Specification | Yes | The full training takes about 4 days on 4 Nvidia V100 GPUs. [...] When computing inference time for all methods, we use a single Nvidia V100 with 32 GB of memory and evaluate the time required to generate one posterior sample.
Software Dependencies | No | The paper mentions software such as PyTorch (Paszke et al., 2019), PyTorch Lightning (Falcon et al., 2019), and the Framework for Easily Invertible Architectures (FrEIA) (Ardizzone et al., 2018), but does not specify version numbers for these software components.
Experiment Setup | Yes | We train the UNet to minimize the mean-squared error (MSE) from the nullspace-projected targets {u^(i)}_{i=1}^N for 50 epochs with batch size 8 and learning rate 0.003. Then, we remove the final 1×1 convolution and jointly train g_θ and h_θ for 100 epochs to minimize the negative log-likelihood (NLL) loss of the nullspace-projected targets. For the brain data, we use batch size 8 and learning rate 0.0003. For the knee data, we use batch size 16 with learning rate 0.0005. All experiments use the Adam optimizer (Kingma & Ba, 2015) with default parameters β1 = 0.9 and β2 = 0.999.
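For reference, the two-stage training recipe quoted above can be collected into a small configuration sketch. The field names and structure below are illustrative (they are not taken from the released code); only the numeric values come from the paper's stated setup.

```python
# Hyperparameters transcribed from the paper's experiment setup.
# Key names are hypothetical; the released code at
# https://github.com/jwen307/mri_cnf may organize these differently.
TRAIN_CONFIG = {
    # Stage 1: pretrain the UNet with MSE on nullspace-projected targets.
    "stage1_unet_pretrain": {
        "loss": "mse",
        "epochs": 50,
        "batch_size": 8,
        "lr": 3e-3,
    },
    # Stage 2: drop the final 1x1 convolution, then jointly train
    # g_theta and h_theta under the negative log-likelihood loss.
    "stage2_joint_nll": {
        "loss": "nll",
        "epochs": 100,
        "brain": {"batch_size": 8, "lr": 3e-4},
        "knee": {"batch_size": 16, "lr": 5e-4},
    },
    # Adam with its default moment-decay parameters.
    "optimizer": {"name": "adam", "betas": (0.9, 0.999)},
}
```

Note that the two datasets share the stage-1 settings but diverge in stage 2, which is why the batch size and learning rate are nested per dataset.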