SEGAN: Structure-Enhanced Generative Adversarial Network for Compressed Sensing MRI Reconstruction
Authors: Zhongnian Li, Tao Zhang, Peng Wan, Daoqiang Zhang (pp. 1012-1019)
AAAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show that SEGAN is able to learn target structure information and achieves state-of-the-art performance for CS-MRI reconstruction. We use a MICCAI 2013 grand challenge dataset and randomly split the T1-weighted MRI dataset: 16095 for training, 5033 for validation and 9854 for testing independently. The previous section describes our method SEGAN and gives some theoretical analysis. In this section we compare our algorithm with several state-of-the-art methods on two aspects: 1) Reconstruction Performance: we examine the performance of SEGAN in CS-MRI reconstruction. 2) Testing Time: we examine the time to generate an image in CS-MRI reconstruction. |
| Researcher Affiliation | Academia | Zhongnian Li, Tao Zhang, Peng Wan, Daoqiang Zhang College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, China {zhongnianli,dqzhang}@nuaa.edu.cn |
| Pseudocode | Yes | Algorithm 1 Structure-Enhanced GAN |
| Open Source Code | No | The paper mentions using software provided by *other* authors (DAGAN and ReGAN) on GitHub, but it does not state that the code for SEGAN (the work described in this paper) is publicly available or provide a link to it. |
| Open Datasets | Yes | We use a MICCAI 2013 grand challenge dataset and randomly split the T1-weighted MRI dataset: 16095 for training, 5033 for validation and 9854 for testing independently. |
| Dataset Splits | Yes | We use a MICCAI 2013 grand challenge dataset and randomly split the T1-weighted MRI dataset: 16095 for training, 5033 for validation and 9854 for testing independently. |
| Hardware Specification | Yes | Training and testing the algorithm use TensorFlow with the Python environment on an NVIDIA GeForce GTX TITAN X with 12GB GPU memory. We test those methods under the environment of Intel Xeon CPU E5-1603 with 32GB memory. |
| Software Dependencies | No | The paper mentions 'tensorflow with the python environment' but does not specify version numbers for either TensorFlow or Python, nor does it list any other software dependencies with specific versions. |
| Experiment Setup | Yes | We used ADAM with momentum for parameter optimization and set the initial learning rate to be 0.0001, the first-order momentum to be 0.9 and the second momentum to be 0.999. The weight decay regularization parameter is 0.0001 and the batch size is 30. 30000 stochastic iterations of training were required to train the SEGAN. In the experiment, we empirically set λ1 = 10, λ2 = 1 and λ3 = 100 to maximize reconstruction performance. We select the polynomial kernel function as correlation function. The number of patches is fixed to 64 and α is set to 0.1. |
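The dataset-split and hyperparameter entries above can be summarized in a short, stand-alone sketch. This is a hedged reconstruction, not the authors' code: the paper reports only the split sizes (16095/5033/9854) and the optimizer settings, so the seed, slice indexing, and split order here are assumptions.

```python
import random

# Split sizes reported in the paper (MICCAI 2013 grand challenge, T1-weighted slices).
N_TRAIN, N_VAL, N_TEST = 16095, 5033, 9854
N_TOTAL = N_TRAIN + N_VAL + N_TEST  # 30982 slices in total

# Optimizer settings reported in the paper (ADAM); names here are illustrative only.
HYPERPARAMS = {
    "learning_rate": 1e-4,
    "beta_1": 0.9,          # first-order momentum
    "beta_2": 0.999,        # second-order momentum
    "weight_decay": 1e-4,
    "batch_size": 30,
    "iterations": 30000,
}

def random_split(indices, seed=0):
    """Randomly partition slice indices into train/val/test of the reported sizes.

    The seed and ordering are assumptions; the paper states only the sizes
    and that the split was done randomly and independently.
    """
    rng = random.Random(seed)
    shuffled = list(indices)
    rng.shuffle(shuffled)
    train = shuffled[:N_TRAIN]
    val = shuffled[N_TRAIN:N_TRAIN + N_VAL]
    test = shuffled[N_TRAIN + N_VAL:]
    return train, val, test

train, val, test = random_split(range(N_TOTAL))
```

The three partitions are disjoint and together cover all slice indices, matching the reported counts exactly.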