Neurodegenerative Brain Network Classification via Adaptive Diffusion with Temporal Regularization

Authors: Hyuna Cho, Jaeyoon Sim, Guorong Wu, Won Hwa Kim

ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The superiority of our method is validated on two neurodegenerative disease benchmarks for graph classification: Alzheimer's Disease Neuroimaging Initiative (ADNI) and Parkinson's Progression Markers Initiative (PPMI) datasets.
Researcher Affiliation | Academia | 1 Pohang University of Science and Technology (POSTECH), South Korea; 2 University of North Carolina at Chapel Hill, USA. Correspondence to: Won Hwa Kim <wonhwa@postech.ac.kr>.
Pseudocode | No | The paper does not contain any explicitly labeled 'Pseudocode' or 'Algorithm' blocks, nor any structured, code-like procedural steps.
Open Source Code | No | The paper does not provide any specific statements about releasing source code for the described methodology, nor does it include a link to a code repository.
Open Datasets | Yes | We used two independent neurodegenerative brain network datasets: Alzheimer's Disease Neuroimaging Initiative (ADNI) and Parkinson's Progression Markers Initiative (PPMI), whose demographics are given in Tables 1 and 2. The ADNI study (Mueller et al., 2005) provides the largest public Alzheimer's Disease (AD) dataset with diverse biomarkers from multi-modal imaging. The PPMI study (Marek et al., 2011) provides public biomarkers for Parkinson's Disease (PD) progression.
Dataset Splits | Yes | All experiments were performed with 5-fold cross-validation, and the resultant accuracy, precision, recall, and specificity from all folds were averaged to avoid any biases. (A minimal sketch of this evaluation protocol follows the table.)
Hardware Specification | Yes | To train AGT, we utilized the PyTorch framework with a single NVIDIA RTX 6000 Ada Generation GPU.
Software Dependencies | No | The paper mentions using the 'PyTorch framework' but does not specify a version number for it or any other software dependency.
Experiment Setup | Yes | In Table 7, we provide details of the implementation settings of AGT. We performed a grid search for all baselines and AGT to choose the best number of hidden units in {8, 16, 32, 64} and a learning rate in {0.1, 0.01, 0.001, 0.0001}. As the temporal regularization R_temp requires a sufficient amount of data for measuring accurate group distances, we used the maximum batch size, comprising all training data. For the scale initialization, N scales were initialized randomly within the range [-2, 2]. Table 7 lists specific values for the optimizer (Adam), learning rate, weight for R_temp, weight decay, batch size, number of epochs, hidden dimension of the GCN, number of GCN layers, and number of f() layers. (A hedged sketch of this grid search and scale initialization also follows the table.)
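
To make the evaluation protocol in the Dataset Splits row concrete, below is a minimal sketch of 5-fold cross-validation with fold-averaged accuracy, precision, recall, and specificity. This is an illustration under stated assumptions, not the authors' code: `train_and_predict` is a hypothetical stand-in for training AGT on one fold, and a binary task is assumed for the confusion-matrix arithmetic (the multi-stage ADNI/PPMI labels would instead need per-class, e.g. macro, averaging).

```python
# Hypothetical sketch of the paper's 5-fold evaluation protocol.
# `graphs` and `labels` (a NumPy array) stand in for the ADNI/PPMI brain
# networks; `train_and_predict` is a placeholder for training on one fold.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import confusion_matrix

def fold_metrics(y_true, y_pred):
    # Specificity is not a built-in sklearn scorer, so derive all four
    # metrics from the (binary) confusion matrix.
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
        "precision":   tp / (tp + fp),
        "recall":      tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

def cross_validate(graphs, labels, train_and_predict, seed=0):
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
    scores = []
    for train_idx, test_idx in skf.split(graphs, labels):
        y_pred = train_and_predict(train_idx, test_idx)
        scores.append(fold_metrics(labels[test_idx], y_pred))
    # Average each metric over the five folds, as reported in the paper.
    return {k: float(np.mean([s[k] for s in scores])) for k in scores[0]}
```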
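
Similarly, the Experiment Setup row can be sketched as follows. Only the search grids, the Adam optimizer, the full-batch regime, and the scale initialization in [-2, 2] come from the paper; `build_model`, `train_full_batch`, and `evaluate` are hypothetical stand-ins, since no official code release is cited.

```python
# Hedged sketch of the reported setup: grid search over hidden units and
# learning rates, Adam, full-batch training, and random scale initialization.
import itertools
import torch

HIDDEN_UNITS   = [8, 16, 32, 64]              # grid from the paper
LEARNING_RATES = [0.1, 0.01, 0.001, 0.0001]   # grid from the paper

def init_scales(n_scales):
    # N learnable scales initialized uniformly at random within [-2, 2].
    return torch.nn.Parameter(torch.empty(n_scales).uniform_(-2.0, 2.0))

def grid_search(build_model, train_full_batch, evaluate):
    best, best_acc = None, -1.0
    for hidden, lr in itertools.product(HIDDEN_UNITS, LEARNING_RATES):
        model = build_model(hidden_dim=hidden)
        optim = torch.optim.Adam(model.parameters(), lr=lr)
        # R_temp needs enough samples for accurate group distances, so each
        # step uses the maximum batch size, i.e. all training data at once.
        train_full_batch(model, optim)
        acc = evaluate(model)
        if acc > best_acc:
            best, best_acc = (hidden, lr), acc
    return best, best_acc
```

The full-batch choice in the sketch mirrors the paper's stated reason: the temporal regularizer compares group-level statistics, which are only reliable when computed over the entire training set rather than small minibatches.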