Surreal-GAN: Semi-Supervised Representation Learning via GAN for uncovering heterogeneous disease-related imaging patterns

Authors: Zhijian Yang, Junhao Wen, Christos Davatzikos

ICLR 2022

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "We first validated the model through semi-synthetic experiments, and then demonstrated its potential in capturing biologically plausible imaging patterns in Alzheimer's disease (AD)." |
| Researcher Affiliation | Academia | "Zhijian Yang¹,², Junhao Wen¹ and Christos Davatzikos¹; ¹Center for Biomedical Image Computing and Analytics, University of Pennsylvania; ²Graduate Group in Applied Mathematics and Computational Science, University of Pennsylvania" |
| Pseudocode | Yes | "Detailed training procedure of Surreal-GAN is disclosed by Algorithm 1." |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. There is no mention of a repository link, an explicit code release statement, or code in supplementary materials. |
| Open Datasets | Yes | "For AD, we defined the CN group (N=850) to be subjects with Mini-Mental State Examination (MMSE) scores above 29, and the PT group (N=2204) as subjects diagnosed as mild cognitive impairment (MCI) or AD at baseline." (see the group-selection sketch below) |
| Dataset Splits | Yes | "A five-fold cross-validation was run with three different models." (see the cross-validation sketch below) |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper mentions software components like the "ADAM optimizer" and "AFNI-3dttest" but does not provide specific version numbers for these or other ancillary software dependencies required for replication. |
| Experiment Setup | Yes | "ADAM optimizer (Kingma & Ba, 2014) was used with a learning rate (lr) of 4×10⁻⁵ for the Discriminator and 2×10⁻⁴ for the transformation function f and clustering function g. β₁ and β₂ are set to be 0.5 and 0.999, respectively. For hyper-parameters, we set γ = 6, κ = 80, ζ = 80, µ = 500, η = 6... the batch size was set to be 1/8 of the PT data sample sizes. The model was trained for at least 100,000 iterations..." (see the training-setup sketch below) |
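
The "Open Datasets" row quotes the paper's CN/PT group definitions. A minimal sketch of that selection rule, assuming a hypothetical phenotype table whose column names (`MMSE`, `diagnosis`) are illustrative rather than taken from the paper:

```python
import pandas as pd

# Hypothetical phenotype table; column names and values are placeholders.
pheno = pd.DataFrame({
    "subject": ["s1", "s2", "s3", "s4"],
    "MMSE": [30, 24, 30, 27],
    "diagnosis": ["CN", "MCI", "CN", "AD"],
})

cn_group = pheno[pheno["MMSE"] > 29]                      # CN: MMSE above 29
pt_group = pheno[pheno["diagnosis"].isin(["MCI", "AD"])]  # PT: MCI or AD at baseline
print(len(cn_group), len(pt_group))
```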
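
The "Dataset Splits" row notes a five-fold cross-validation run with three different models. A minimal sketch of that protocol, assuming a hypothetical feature matrix `X` of regional imaging measures and three placeholder model configurations (the paper does not specify its split implementation):

```python
import numpy as np
from sklearn.model_selection import KFold

X = np.random.rand(2204, 145)                      # placeholder PT feature matrix
model_configs = ["model_a", "model_b", "model_c"]  # three hypothetical variants

kfold = KFold(n_splits=5, shuffle=True, random_state=0)
for config in model_configs:
    for fold, (train_idx, val_idx) in enumerate(kfold.split(X)):
        X_train, X_val = X[train_idx], X[val_idx]
        # train the variant `config` on X_train, evaluate on X_val
        print(f"{config}: fold {fold} -> train {len(train_idx)}, val {len(val_idx)}")
```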
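
The "Experiment Setup" row quotes the optimization settings. A minimal PyTorch sketch of those settings, assuming placeholder networks for the Discriminator, transformation function f, and clustering function g (the paper's architectures and code are not public, and the feature/pattern dimensions below are assumptions):

```python
import torch
import torch.nn as nn

n_features, n_dims = 145, 3  # hypothetical ROI-feature and pattern dimensions

# Placeholder networks; the paper's actual architectures are not released.
discriminator = nn.Sequential(nn.Linear(n_features, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1))
f = nn.Sequential(nn.Linear(n_features + n_dims, 64), nn.LeakyReLU(0.2), nn.Linear(64, n_features))
g = nn.Sequential(nn.Linear(n_features, 64), nn.LeakyReLU(0.2), nn.Linear(64, n_dims))

# Settings quoted above: lr 4e-5 for the Discriminator, 2e-4 for f and g,
# with Adam betas (0.5, 0.999).
opt_d  = torch.optim.Adam(discriminator.parameters(), lr=4e-5, betas=(0.5, 0.999))
opt_fg = torch.optim.Adam(list(f.parameters()) + list(g.parameters()),
                          lr=2e-4, betas=(0.5, 0.999))

n_pt = 2204              # PT sample size from the "Open Datasets" row
batch_size = n_pt // 8   # batch size set to 1/8 of the PT sample size
n_iterations = 100_000   # trained for at least 100,000 iterations
```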