Robust Compressed Sensing MRI with Deep Generative Priors
Authors: Ajil Jalal, Marius Arvinte, Giannis Daras, Eric Price, Alexandros G. Dimakis, Jon Tamir
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform retrospective under-sampling in all experiments, i.e., given fully-sampled k-space measurements from the NYU fastMRI [56, 94] and Stanford MRI [1] datasets, we apply sampling masks and evaluate the performance of all considered algorithms on the reconstructed data. |
| Researcher Affiliation | Academia | Ajil Jalal (ECE, UT Austin) ajiljalal@utexas.edu; Marius Arvinte* (ECE, UT Austin) arvinte@utexas.edu; Giannis Daras (CS, UT Austin) giannisdaras@utexas.edu; Eric Price (CS, UT Austin) ecprice@cs.utexas.edu; Alexandros G. Dimakis (ECE, UT Austin) dimakis@austin.utexas.edu; Jonathan I. Tamir (ECE, UT Austin) jtamir@utexas.edu |
| Pseudocode | Yes | Putting everything together, our final algorithm is: initialize x_0 ∼ N_c(0, I) and, for all t = 0, …, T − 1, update x_{t+1} ← x_t + η_t f(x_t; β_t) + Aᴴ(y − A x_t) + √(2η_t) ζ_t, where ζ_t ∼ N(0, I). (Eq. 4) |
| Open Source Code | Yes | Our code and models are available at: https://github.com/utcsilab/csgm-mri-langevin. |
| Open Datasets | Yes | We perform retrospective under-sampling in all experiments, i.e., given fully-sampled k-space measurements from the NYU fastMRI [56, 94] and Stanford MRI [1] datasets. |
| Dataset Splits | No | The paper states, “Specifically, we train using T2-weighted images at a field strength of 3 Tesla for a total of 14,539 2D training slices.” and “We train the MoDL and E2E-VarNet baselines from scratch on the same training dataset as our method...”, but it does not provide explicit percentages or counts for a validation split. |
| Hardware Specification | Yes | When benchmarked on an NVIDIA RTX 2080Ti GPU, our method takes 16 minutes and 0.95 GB of memory to reconstruct a high-resolution brain scan |
| Software Dependencies | Yes | We use the publicly available implementation from the BART toolbox [88, 86] |
| Experiment Setup | Yes | We train the MoDL and E2E-VarNet baselines from scratch on the same training dataset as our method, at acceleration factors R = {3, 6} and equispaced under-sampling, with a supervised SSIM loss on the magnitude MVUE image, for 40 and 15 epochs, respectively, using a batch size of 1. For the ConvDecoder baseline... optimize the number of fitting iterations... We find that 10000 iterations are sufficient... |
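The annealed Langevin update quoted in the Pseudocode row (Eq. 4) can be sketched in a few lines of NumPy. This is a toy illustration under stated assumptions, not the paper's implementation: `A` is a random matrix standing in for the masked multi-coil Fourier operator, `score` is a placeholder for the trained score network f(x; β_t), the step size `eta` and noise level `beta` are fixed rather than annealed, and the step size is applied to the data-consistency term as well for stability.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes for illustration; the paper reconstructs full-resolution MRI scans.
n, m = 64, 32

# Hypothetical stand-ins: `A` plays the role of the (masked Fourier) forward
# operator and `y` the under-sampled k-space measurements.
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2 * n)
x_true = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y = A @ x_true

def score(x, beta):
    # Placeholder: score of a zero-mean Gaussian prior with variance beta.
    # In the paper this is the learned generative score network f(x; beta_t).
    return -x / beta

T, eta, beta = 200, 1e-2, 1.0
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # x_0 ~ N_c(0, I)
for t in range(T):
    zeta = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # zeta_t ~ N(0, I)
    # Eq. (4): Langevin step combining the prior score with the
    # data-consistency gradient A^H (y - A x_t), plus injected noise.
    x = x + eta * (score(x, beta) + A.conj().T @ (y - A @ x)) + np.sqrt(2 * eta) * zeta
```

Because posterior sampling injects fresh noise √(2η_t) ζ_t at every step, repeated runs yield different plausible reconstructions rather than a single point estimate, which is the basis of the paper's robustness claims.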