HUMUS-Net: Hybrid Unrolled Multi-scale Network Architecture for Accelerated MRI Reconstruction
Authors: Zalan Fabian, Berk Tinaz, Mahdi Soltanolkotabi
NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section we provide experimental results on our proposed architecture, HUMUS-Net. First, we demonstrate the reconstruction performance of our model on various datasets, including the large-scale fastMRI dataset. Then, we justify our design choices through a set of ablation studies. |
| Researcher Affiliation | Academia | Zalan Fabian, Dept. of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA (zfabian@usc.edu); Berk Tinaz, Dept. of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA (tinaz@usc.edu); Mahdi Soltanolkotabi, Dept. of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA (soltanol@usc.edu) |
| Pseudocode | No | The paper includes architectural diagrams (Figures 1-5) and detailed descriptions of the model components but does not present any formal pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our results are fully reproducible and the source code is available online: https://github.com/MathFLDS/HUMUS-Net |
| Open Datasets | Yes | More recently, the fastMRI dataset [Zbontar et al., 2019], the largest publicly available MRI dataset, has been gaining ground as a standard benchmark to evaluate MRI reconstruction methods. We investigate the performance of HUMUS-Net on three different datasets... fastMRI: The fastMRI dataset [Zbontar et al., 2019] is the largest publicly available MRI dataset... Stanford 2D: Next, we run experiments on the Stanford 2D FSE [Cheng] dataset, a publicly available MRI dataset... Stanford 3D: Finally, we evaluate our model on the Stanford Fullysampled 3D FSE Knees dataset [Sawyer et al., 2013], a public MRI dataset... |
| Dataset Splits | Yes | We train models both only on the training split, and also on the training and validation splits combined (additional 20% data) for the leaderboard. We randomly sample 80% of volumes as train data and use the rest for validation. We randomly generate 3 different train-validation splits this way to reduce variations in the presented metrics. (A minimal sketch of this split procedure follows the table.) |
| Hardware Specification | No | The paper mentions 'GPU memory (16 GB)' in the context of fitting models but does not specify any particular GPU model, CPU, or other hardware components used for the experiments. |
| Software Dependencies | No | The paper refers to hyperparameters in the supplementary material and source code but does not list specific software dependencies (e.g., Python, PyTorch, or other libraries) with version numbers in the main text. |
| Experiment Setup | No | For details on HUMUS-Net hyperparameters and training, we refer the reader to the supplementary. For E2E-VarNet, we use the hyperparameters specified in Sriram et al. [2020]. |
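
The volume-level train-validation split quoted in the Dataset Splits row can be sketched in a few lines of Python. This is a minimal illustration of the stated protocol (randomly sample 80% of volumes for training, repeat for three independent splits); the directory layout, `.h5` file extension, and seed values are assumptions made for the example, not details taken from the paper.

```python
import random
from pathlib import Path

def make_split(volume_dir: str, seed: int, train_frac: float = 0.8):
    """Randomly split MRI volumes into train/validation sets.

    Mirrors the quoted protocol: 80% of volumes are sampled for
    training and the remainder is held out for validation. The
    directory layout and seed values are illustrative assumptions.
    """
    volumes = sorted(Path(volume_dir).glob("*.h5"))  # one file per volume (assumed layout)
    rng = random.Random(seed)  # seeded RNG so each split is reproducible
    rng.shuffle(volumes)
    n_train = int(train_frac * len(volumes))
    return volumes[:n_train], volumes[n_train:]

# Three independent splits to reduce variation in the reported
# metrics, as described in the paper; the seeds are hypothetical.
splits = [make_split("stanford3d/volumes", seed=s) for s in (0, 1, 2)]
for i, (train, val) in enumerate(splits):
    print(f"split {i}: {len(train)} train / {len(val)} val volumes")
```

Splitting at the volume level, as the quoted protocol does, rather than at the slice level avoids leaking slices of the same scan into both the training and validation sets.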