Compressed Sensing MRI Using a Recursive Dilated Network

Authors: Liyan Sun, Zhiwen Fan, Yue Huang, Xinghao Ding, John Paisley

AAAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results show that the proposed RDN model achieves state-of-the-art performance in CS-MRI while using far fewer parameters than previously required. The authors compare their algorithm with several state-of-the-art deep and non-deep techniques on brain MRI images.
Researcher Affiliation | Academia | Fujian Key Laboratory of Sensing and Computing for Smart City, Xiamen University, Fujian, China (correspondence: dxh@xmu.edu.cn); Department of Electrical Engineering, Columbia University, New York, NY, USA.
Pseudocode | No | The paper contains no sections or figures explicitly labeled "Pseudocode" or "Algorithm", nor any structured code-like blocks.
Open Source Code | No | The paper provides no explicit statement about releasing source code for its methodology and no link to a code repository.
Open Datasets | No | The paper states: "Our training data consists of 2800 normalized real-valued brain MRI. The testing set consists of 61 brain MRI. We collected the images using a 3T MR scanner." However, it provides no information or links suggesting that this dataset is publicly available or an open dataset.
Dataset Splits | No | The paper mentions: "Our training data consists of 2800 normalized real-valued brain MRI. The testing set consists of 61 brain MRI." It specifies training and testing sets but mentions no separate validation set and gives no details about the data-splitting methodology (e.g., percentages, cross-validation).
Hardware Specification | Yes | We train and test the algorithm using TensorFlow for the Python environment on an NVIDIA GeForce GTX 1080 Ti with 11 GB of GPU memory.
Software Dependencies | No | The paper states: "We train and test the algorithm using TensorFlow for the Python environment...". While it mentions TensorFlow and Python, it specifies no version numbers for these or any other software components.
Experiment Setup | Yes | We use the Xavier method to initialize the network parameters and ADAM with momentum for parameter learning. We select the initial learning rate to be 0.001, the first-order momentum to be 0.9, and the second-order momentum to be 0.999. We set the weight decay regularization parameter to 0.0001 and the training batch size to 8. Training the RDN model required 70,000 stochastic iterations.
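The reported optimizer settings (learning rate 0.001, momenta 0.9/0.999, weight decay 0.0001) can be made concrete with a minimal sketch of one ADAM update step. This is an illustration only, not the authors' code: the epsilon constant and the way weight decay is folded into the gradient are assumptions, since the paper does not state them.

```python
import numpy as np

# Hyperparameters reported in the paper's experiment setup.
LEARNING_RATE = 1e-3   # initial learning rate
BETA1 = 0.9            # first-order momentum
BETA2 = 0.999          # second-order momentum
WEIGHT_DECAY = 1e-4    # weight decay regularization parameter
BATCH_SIZE = 8
NUM_ITERATIONS = 70000
EPS = 1e-8             # numerical-stability constant (assumed; not stated in the paper)

def adam_step(param, grad, m, v, t):
    """One ADAM update with the paper's momenta, with L2 weight decay
    folded into the gradient (a common convention; assumed here)."""
    grad = grad + WEIGHT_DECAY * param           # add weight-decay term
    m = BETA1 * m + (1.0 - BETA1) * grad         # biased first-moment estimate
    v = BETA2 * v + (1.0 - BETA2) * grad ** 2    # biased second-moment estimate
    m_hat = m / (1.0 - BETA1 ** t)               # bias-corrected first moment
    v_hat = v / (1.0 - BETA2 ** t)               # bias-corrected second moment
    param = param - LEARNING_RATE * m_hat / (np.sqrt(v_hat) + EPS)
    return param, m, v
```

In a TensorFlow setup like the one the paper describes, the equivalent configuration would be roughly `tf.keras.optimizers.Adam(learning_rate=1e-3, beta_1=0.9, beta_2=0.999)`, with weight decay applied as a separate regularization term.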