Super-Resolution and Inpainting with Degraded and Upgraded Generative Adversarial Networks

Authors: Yawen Huang, Feng Zheng, Danyang Wang, Junyu Jiang, Xiaoqian Wang, Ling Shao

IJCAI 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our comparative evaluation demonstrates the effectiveness of the proposed method on different brain MRI datasets. In addition, our method outperforms many existing super-resolution and inpainting approaches. We evaluate the proposed method on two publicly available datasets, IXI and HCP, which include real acquired LQ/HQ data. The quantitative results are listed in Table 1.
Researcher Affiliation | Collaboration | Yawen Huang (1,2), Feng Zheng (3,4), Danyang Wang (1,2), Junyu Jiang (1,2), Xiaoqian Wang (5), Ling Shao (6). Affiliations: 1 Malong Technologies; 2 Shenzhen Malong Artificial Intelligence Research Center; 3 Department of Computer Science and Technology, Southern University of Science and Technology; 4 Research Institute of Trustworthy Autonomous Systems; 5 Purdue University; 6 Inception Institute of Artificial Intelligence.
Pseudocode | No | The paper describes the network architecture and loss functions, but does not provide any pseudocode or algorithm blocks.
Open Source Code | No | The paper provides no explicit statements or links to open-source code for the described methodology.
Open Datasets | Yes | We evaluate the proposed method on two publicly available datasets, IXI (http://brain-development.org/ixi-dataset) and HCP (https://www.humanconnectome.org), which include real acquired LQ/HQ data. Specifically, the IXI dataset contains 578 healthy subjects acquired by a Philips 3T/1.5T system and a GE 1.5T system. One branch of the HCP dataset has a total of 200 subjects acquired via a Siemens 3T scanner.
Dataset Splits | No | The paper states: "We split the datasets into 500 (IXI) and 120 (HCP) for training, 78 (IXI) and 80 (HCP) for testing." Training and testing splits are given explicitly, but no separate validation split is mentioned (a sketch of such a split appears after the table).
Hardware Specification | No | The paper does not specify the hardware (e.g., GPU or CPU models) used to run the experiments.
Software Dependencies | No | The paper mentions using a VGG19 network and the Adam optimizer, but it does not specify version numbers for any software dependencies or libraries (see the dependency sketch after the table).
Experiment Setup | Yes | We use Adam with 10^5 iterations and a learning rate of 10^-4, which is decayed by a factor of 2 every 2 × 10^5 minibatch updates. For the parameters, we set α = 10, δ = 10, β = 0.1, λ = 1. (A hedged configuration sketch appears below.)
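
The reported counts pin down the split sizes (500/78 for IXI, 120/80 for HCP) but not how subjects were assigned. A minimal sketch of a subject-level split under those counts, with hypothetical subject IDs and an assumed random assignment:

```python
import random

# Hypothetical subject IDs; the real identifiers come from the IXI and
# HCP releases, and the paper does not say how subjects were assigned.
ixi_subjects = [f"IXI{i:03d}" for i in range(578)]
hcp_subjects = [f"HCP{i:03d}" for i in range(200)]

random.seed(0)  # assumed; no seed or assignment rule is reported
random.shuffle(ixi_subjects)
random.shuffle(hcp_subjects)

# Splits as reported: 500/78 for IXI, 120/80 for HCP; no validation set.
ixi_train, ixi_test = ixi_subjects[:500], ixi_subjects[500:]
hcp_train, hcp_test = hcp_subjects[:120], hcp_subjects[120:]

assert len(ixi_test) == 78 and len(hcp_test) == 80
```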
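Since the dependencies row flags the missing versions, here is one plausible instantiation of the two named components, assuming PyTorch with torchvision >= 0.13 for the weights enum and a perceptual-loss use of VGG19; neither the framework nor this loss formulation is confirmed by the paper:

```python
import torch
import torchvision

# Assumes torchvision's ImageNet-pretrained VGG19; the paper names the
# network but not the framework, library version, or weights used.
weights = torchvision.models.VGG19_Weights.IMAGENET1K_V1
vgg_features = torchvision.models.vgg19(weights=weights).features[:36].eval()
for p in vgg_features.parameters():
    p.requires_grad_(False)  # feature extractor is kept frozen

def perceptual_distance(x, y):
    """L1 distance in VGG19 feature space (assumed loss formulation).

    VGG19 expects 3-channel input, so single-channel MRI slices would
    need to be replicated across channels before calling this.
    """
    return torch.nn.functional.l1_loss(vgg_features(x), vgg_features(y))
```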
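Finally, a minimal PyTorch sketch of the reported training configuration. The generator and the objective are placeholders; only the Adam settings, the halving schedule, the iteration count, and the weight values (α, δ, β, λ) come from the quoted text:

```python
import torch

# Placeholder single-layer generator; the paper's architecture is not
# reproduced here, only the reported optimizer settings and loss weights.
generator = torch.nn.Conv2d(1, 1, kernel_size=3, padding=1)

optimizer = torch.optim.Adam(generator.parameters(), lr=1e-4)
# Learning rate "decayed by a factor of 2" every 2 x 10^5 minibatch updates.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=200_000, gamma=0.5)

ALPHA, DELTA, BETA, LAMBDA = 10.0, 10.0, 0.1, 1.0  # weights as reported

for step in range(100_000):  # 10^5 iterations, as reported
    lq = torch.randn(4, 1, 64, 64)  # stand-in low-quality minibatch
    hq = torch.randn(4, 1, 64, 64)  # stand-in high-quality minibatch
    pred = generator(lq)
    # Placeholder objective; the paper's full loss is a weighted sum of
    # several terms using alpha, delta, beta, and lambda.
    loss = ALPHA * torch.nn.functional.l1_loss(pred, hq)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()
```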