MRI Reconstruction with Interpretable Pixel-Wise Operations Using Reinforcement Learning

Authors: Wentian Li, Xidong Feng, Haotian An, Xiang Yao Ng, Yu-Jin Zhang

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct experiments on MICCAI dataset of brain tissues and fast MRI dataset of knee images. Our proposed method performs favorably against previous approaches.
Researcher Affiliation | Academia | Wentian Li, Xidong Feng, Haotian An, Xiang Yao Ng, Yu-Jin Zhang; Tsinghua University, Beijing, China; li-wt17@mails.tsinghua.edu.cn
Pseudocode | No | The paper describes its algorithm and model architecture in text and diagrams (e.g., Figure 2) but does not provide structured pseudocode or an algorithm block.
Open Source Code | Yes | "Our code is available." https://github.com/wentianli/MRI_RL
Open Datasets | Yes | We conduct experiments on the MICCAI 2013 Grand Challenge dataset (https://my.vanderbilt.edu/masi/workshops/) of T1-weighted MR images of brain tissues. ... We also evaluate our method on the single-coil subset of the fast MRI dataset (Zbontar et al. 2018) of knee images.
Dataset Splits | Yes | We follow (Yang et al. 2017) and include 16095 training images and 50 test images. ... We train our model on the training set and report our results on the first 1000 images of the validation set.
Hardware Specification | No | The paper does not specify any hardware details such as the GPU or CPU models used for running the experiments.
Software Dependencies | No | The paper mentions the use of "Adam and SGD optimizers" and that it "implemented Unet", but does not provide specific version numbers for any software dependencies, libraries, or frameworks.
Experiment Setup | Yes | We set the batch size to 24. Adam and SGD optimizers are used for the discrete and continuous modules, respectively. The learning rate is 1 × 10−3 initially and decays linearly to 5 × 10−4. The network first conducts warmup training in which only the discrete policy module is trained. The total number of training iterations is 10k, and the alternation between the two modules happens every 2 iterations. Empirically, we set the loss weights λ1 = 0.25, λ2 = 0.1, and λ3 = 0.5.
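The training schedule described in the Experiment Setup row can be sketched as follows. This is a minimal illustration, not the authors' code: the function names are invented, and the warmup length (`warmup_steps`) is a hypothetical placeholder since the excerpt does not state it. Only the stated numbers (batch size 24, 10k iterations, lr 1e-3 decaying linearly to 5e-4, alternation every 2 iterations, λ weights) come from the paper.

```python
# Hypothetical sketch of the training schedule from the paper's setup.
# warmup_steps is an assumed placeholder; the paper does not state it.

def learning_rate(step, total_steps=10_000, lr_start=1e-3, lr_end=5e-4):
    """Linear decay from 1e-3 to 5e-4 over the 10k training iterations."""
    frac = min(step / total_steps, 1.0)
    return lr_start + frac * (lr_end - lr_start)

def module_to_train(step, warmup_steps=1_000, alternate_every=2):
    """During warmup only the discrete policy module is trained;
    afterwards the two modules alternate every 2 iterations."""
    if step < warmup_steps:
        return "discrete"
    phase = ((step - warmup_steps) // alternate_every) % 2
    return "discrete" if phase == 0 else "continuous"

# Loss weights reported in the paper.
LAMBDAS = {"lambda1": 0.25, "lambda2": 0.1, "lambda3": 0.5}
```

With these defaults, `learning_rate(0)` returns 1e-3 and `learning_rate(10_000)` returns 5e-4; the optimizer choice (Adam for the discrete module, SGD for the continuous one) would be wired in where each module's update step runs.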