Null Space Matters: Range-Null Decomposition for Consistent Multi-Contrast MRI Reconstruction

Authors: Jiacheng Chen, Jiawei Jiang, Fei Wu, Jianwei Zheng

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The quantitative and qualitative results show that our proposal outperforms most cutting-edge methods by a large margin. Codes will be released on https://github.com/chenjiachengzzz/RNU. (Supporting sections: Experiments; Datasets and Implementation Details; Results and Analysis; Ablation Experiments.)
Researcher Affiliation | Academia | College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, China; zjw@zjut.edu.cn
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | Codes will be released on https://github.com/chenjiachengzzz/RNU.
Open Datasets | Yes | Datasets: The classical IXI dataset and the currently largest fastMRI (Zbontar et al. 2018) dataset are employed for performance evaluation.
Dataset Splits | No | The paper mentions 'training data' and uses 'validation' when reporting performance, but it does not explicitly provide dataset split information (percentages, sample counts, or splitting methodology) for the training, validation, and test sets.
Hardware Specification | Yes | The proposed RNU is implemented using PyTorch and evaluated with an NVIDIA 3090 GPU.
Software Dependencies | No | The paper mentions PyTorch as the implementation framework, but it does not provide specific version numbers for software dependencies.
Experiment Setup | Yes | The Adam optimizer is utilized for model training, with an initial learning rate lr = 10^-4 that is gradually decayed to 10^-6 over 50 epochs. The batch size is set as 4. To facilitate better generalization, the training data are randomly augmented by flipping horizontally or vertically and rotating at different angles. L1 loss is used to optimize the network. To ensure a fair comparison, all competing approaches are trained using finely tuned parameter settings. Unless specified otherwise, the stage number K is 8.
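The quoted setup maps directly onto a short training script. Below is a minimal PyTorch sketch of that configuration, not the authors' implementation (see their repository for that): the RNU network is replaced by a single-convolution placeholder, the decay schedule is not named in the paper (cosine annealing from 10^-4 to 10^-6 over 50 epochs is an assumption), and the augmentation angles are assumed to be multiples of 90 degrees.

```python
import torch
import torch.nn as nn

# Placeholder for the RNU network; the real architecture is in the
# authors' repository (https://github.com/chenjiachengzzz/RNU).
model = nn.Conv2d(1, 1, kernel_size=3, padding=1)

# Adam with initial lr = 1e-4, as stated in the paper.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# The paper says lr decays from 1e-4 to 1e-6 over 50 epochs but does not
# name the schedule; cosine annealing to eta_min=1e-6 is one plausible choice.
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=50, eta_min=1e-6)
criterion = nn.L1Loss()  # L1 loss, as stated in the paper

def augment(x):
    """Random horizontal/vertical flips and rotations. The paper only says
    'rotating at different angles'; 90-degree multiples are assumed here."""
    if torch.rand(1) < 0.5:
        x = torch.flip(x, dims=[-1])   # horizontal flip
    if torch.rand(1) < 0.5:
        x = torch.flip(x, dims=[-2])   # vertical flip
    k = int(torch.randint(0, 4, (1,)))
    return torch.rot90(x, k, dims=[-2, -1])

for epoch in range(50):
    # Dummy batch of size 4; replace with IXI / fastMRI data loaders.
    inputs = augment(torch.randn(4, 1, 64, 64))
    targets = torch.randn(4, 1, 64, 64)
    optimizer.zero_grad()
    loss = criterion(model(inputs), targets)
    loss.backward()
    optimizer.step()
    scheduler.step()  # one scheduler step per epoch
```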