Reconstructing Perceptive Images from Brain Activity by Shape-Semantic GAN

Authors: Tao Fang, Yu Qi, Gang Pan

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments demonstrate that Shape-Semantic GAN improves reconstruction similarity and image quality, and achieves state-of-the-art image reconstruction performance.
Researcher Affiliation | Academia | 1 College of Computer Science and Technology, Zhejiang University; 2 State Key Lab of CAD&CG, Zhejiang University; 3 The First Affiliated Hospital, College of Medicine, Zhejiang University
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper states 'We implemented the image generator using the PyTorch framework and modified the image translation model provided by [11]' but does not provide a link to its own source code or explicitly state that the code is released.
Open Datasets | Yes | 'We make use of a publicly available benchmark dataset from [23].'
Dataset Splits | Yes | 'The corresponding stimulus images were selected from ImageNet, including 1200 training images and 50 test images from 150 and 50 categories respectively. ... Forty samples in the original training set are reserved for validation and the rest are used for decoder training in this experiment.'
Hardware Specification | No | The paper does not provide specific hardware details used for running its experiments.
Software Dependencies | No | The paper mentions the PyTorch framework but does not specify a version number or any other software dependencies with versions.
Experiment Setup | Yes | In GAN training, minibatch SGD is used and the Adam solver optimizes the parameters with momentum β1 = 0.9 and β2 = 0.999. The initial learning rate is 2 × 10−4 with a batch size of 10. The weights of the individual loss terms affect the quality of the reconstructed image; the authors set λimg = 100 to balance sharpness against similarity to the stimulus images. The image generator is trained for 200 epochs in total, with learning rate decay at epoch 120.
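As a rough sketch, the reported training configuration (Adam with β1 = 0.9, β2 = 0.999, initial learning rate 2 × 10−4, decay at epoch 120 of 200, λimg = 100) could be set up in PyTorch as follows. The stand-in generator/discriminator modules, the decay factor of 0.1, and the `generator_loss` helper are illustrative assumptions, not the paper's actual code:

```python
import torch

# Hypothetical stand-ins for the paper's generator and discriminator;
# the actual Shape-Semantic GAN architectures are not reproduced here.
G = torch.nn.Sequential(torch.nn.Conv2d(1, 3, 3, padding=1))
D = torch.nn.Sequential(torch.nn.Conv2d(3, 1, 3, padding=1))

# Adam with the momenta and initial learning rate reported in the paper.
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.9, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.9, 0.999))

# Learning-rate decay at epoch 120 of 200 total epochs; the paper does
# not state the decay factor, so gamma=0.1 is an assumption.
sched_g = torch.optim.lr_scheduler.MultiStepLR(
    opt_g, milestones=[120], gamma=0.1
)

lambda_img = 100  # reported weight balancing sharpness vs. similarity


def generator_loss(adv_loss, img_loss):
    # Total generator objective: adversarial term plus the
    # lambda_img-weighted image-similarity term.
    return adv_loss + lambda_img * img_loss
```

Stepping the scheduler once per epoch keeps the learning rate at 2e-4 until epoch 120, after which it drops by the assumed factor of 0.1.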