Zero-Shot Self-Supervised Learning for MRI Reconstruction

Authors: Burhaneddin Yaman, Seyed Amir Hossein Hosseini, Mehmet Akcakaya

ICLR 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We performed experiments on publicly available fully-sampled multi-coil knee and brain MRI from the fastMRI database (Knoll et al., 2020a). Figures 3a and 3b show reconstruction results for Cor-PD knee and Ax-FLAIR brain MRI datasets in this setting. Table 1 shows the average SSIM and PSNR values on 30 test slices.
Researcher Affiliation | Academia | Department of Electrical & Computer Engineering, University of Minnesota; Center for Magnetic Resonance Research, University of Minnesota; {yaman013, hosse049, akcakaya}@umn.edu
Pseudocode | No | The paper describes the methodology using text and equations but does not include any explicit pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain an explicit statement about the release of its source code, nor does it provide a link to a code repository.
Open Datasets | Yes | We performed experiments on publicly available fully-sampled multi-coil knee and brain MRI from the fastMRI database (Knoll et al., 2020a).
Dataset Splits | Yes | The proposed approach partitions the available measurements from a single scan into three disjoint sets. Two of these sets are used to enforce data consistency and to define the loss during training for self-supervision, while the last set serves to self-validate, establishing an early stopping criterion. The k-space self-validation set Γ was selected from the acquired measurements Ω using a uniformly random selection with |Γ|/|Ω| = 0.2 (a partitioning sketch is given after this table).
Hardware Specification | Yes | The computation times were measured on machines equipped with 4 NVIDIA V100 GPUs (each with 32 GB memory).
Software Dependencies | No | The paper mentions specific methods and tools such as ResNet and ESPIRiT with citations, but does not specify versions for general software dependencies or programming languages.
Experiment Setup | Yes | All PG-DLR approaches were trained end-to-end using 10 unrolled iterations. End-to-end training was performed with a normalized ℓ1-ℓ2 loss (Adam optimizer, LR = 5 × 10⁻⁴, batch size = 1) (Yaman et al., 2020) (a loss sketch is given after this table).
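
For the Dataset Splits row, the quoted text only fixes the self-validation ratio |Γ|/|Ω| = 0.2 chosen uniformly at random. The following Python sketch illustrates one plausible way such a three-way split of the acquired k-space locations could be coded; it is not the authors' implementation, and the helper name partition_kspace, the use of NumPy, and the 60/40 ratio for the second split are assumptions.

    import numpy as np

    def partition_kspace(omega_mask, val_fraction=0.2, seed=0):
        # Sketch of the three-way split described above (hypothetical helper).
        # Gamma: self-validation set, |Gamma| / |Omega| = val_fraction, uniformly random.
        rng = np.random.default_rng(seed)
        acquired = np.flatnonzero(omega_mask)          # indices of sampled k-space points (Omega)
        n_val = int(round(val_fraction * acquired.size))
        gamma = rng.choice(acquired, size=n_val, replace=False)   # self-validation set
        remaining = np.setdiff1d(acquired, gamma)
        # The remaining points are split again into a data-consistency set and a loss set.
        # The ratio of this second split is not quoted above, so 60/40 is only a placeholder.
        loss_set = rng.choice(remaining, size=int(0.4 * remaining.size), replace=False)
        dc_set = np.setdiff1d(remaining, loss_set)
        return dc_set, loss_set, gamma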
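For the Experiment Setup row, the quoted hyperparameters are a normalized ℓ1-ℓ2 loss, Adam with LR = 5 × 10⁻⁴, batch size = 1, and 10 unrolled iterations. Below is a minimal Python/PyTorch sketch of such a loss and optimizer configuration; PyTorch itself, the function name, and the placeholder network builder are assumptions, since this report does not identify the paper's framework.

    import torch

    def normalized_l1_l2_loss(y_ref, y_pred):
        # Normalized l1-l2 loss in the spirit of Yaman et al. (2020):
        # each error norm is divided by the corresponding norm of the reference k-space.
        diff = y_ref - y_pred
        term_l2 = torch.linalg.vector_norm(diff, ord=2) / torch.linalg.vector_norm(y_ref, ord=2)
        term_l1 = torch.linalg.vector_norm(diff, ord=1) / torch.linalg.vector_norm(y_ref, ord=1)
        return term_l2 + term_l1

    # Hypothetical training configuration matching the quoted hyperparameters:
    # model = build_unrolled_network(num_unrolls=10)              # 10 unrolled iterations (placeholder)
    # optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)   # LR = 5 × 10⁻⁴, batch size = 1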