Learning Dynamic Graph Representation of Brain Connectome with Spatio-Temporal Attention

Authors: Byung-Hoon Kim, Jong Chul Ye, Jae-Jin Kim

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on the HCP-Rest and the HCP-Task datasets demonstrate exceptional performance of our proposed method.
Researcher Affiliation | Academia | Byung-Hoon Kim, Department of Psychiatry, Institute of Behavioral Sciences in Medicine, College of Medicine, Yonsei University (egyptdj@yonsei.ac.kr); Jong Chul Ye, Department of Bio/Brain Engineering, Kim Jaechul Graduate School of AI, KAIST (jong.ye@kaist.ac.kr); Jae-Jin Kim, Department of Psychiatry, Institute of Behavioral Sciences in Medicine, College of Medicine, Yonsei University (jaejkim@yonsei.ac.kr)
Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Code is available at https://github.com/egyptdj/stagin
Open Datasets | Yes | Publicly available fMRI data from the HCP S1200 release [45] was used for our experiments.
Dataset Splits | Yes | We performed 5-fold stratified cross-validation of the dynamic graphs from the dataset, and report mean and standard deviation across the folds. (A minimal cross-validation sketch follows the table.)
Hardware Specification | Yes | Experiments were performed on a workstation with two NVIDIA GeForce GTX 1080 Ti GPUs.
Software Dependencies | No | The paper mentions various models (e.g., GIN, GCN, Transformer encoder) and cites PyTorch Geometric in the references, but does not specify exact version numbers for software dependencies such as Python, PyTorch, or other libraries.
Experiment Setup | Yes | We set the number of layers K = 4, embedding dimension D = 128, window length Γ = 50, window stride S = 3, and regularization coefficient λ = 1.0 × 10⁻⁵. ... Dropout rate 0.5 is applied to the final dynamic graph representation h_Gdyn, and rate 0.1 is applied to the attention vectors z_space and z_time during training. A one-cycle learning rate policy is employed, in which the learning rate is gradually increased from 0.0005 to 0.001 during the early 20% of training, and gradually decreased to 5.0 × 10⁻⁷ afterwards. Thirty training epochs were run for the HCP-Rest dataset with minibatch size 3, while ten epochs were run with minibatch size 16 for the HCP-Task dataset. (A sketch of this schedule follows the cross-validation sketch below.)
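
The 5-fold stratified cross-validation quoted in the Dataset Splits row can be illustrated with a short sketch. This is not the authors' code: `subject_ids`, `labels`, `train_model`, and `evaluate_model` are hypothetical placeholders, and only the fold count and the mean/standard-deviation reporting come from the paper (the actual pipeline lives in the STAGIN repository linked above).

```python
# Minimal sketch of 5-fold stratified cross-validation with fold-wise reporting.
import numpy as np
from sklearn.model_selection import StratifiedKFold

subject_ids = np.arange(1000)                 # hypothetical subject indices
labels = np.random.randint(0, 2, size=1000)   # hypothetical class labels (e.g., gender)

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
fold_metrics = []
for fold, (train_idx, test_idx) in enumerate(skf.split(subject_ids, labels)):
    # train_model / evaluate_model are hypothetical stand-ins for the actual
    # training and evaluation loops of the model under test.
    # model = train_model(subject_ids[train_idx], labels[train_idx])
    # fold_metrics.append(evaluate_model(model, subject_ids[test_idx], labels[test_idx]))
    pass

# The paper reports the mean and standard deviation of each metric across the folds:
# print(np.mean(fold_metrics), np.std(fold_metrics))
```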
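
Similarly, the one-cycle learning rate policy described in the Experiment Setup row can be sketched with PyTorch's `OneCycleLR`. The model, the choice of optimizer, and the step count are assumptions for illustration; only the numeric values (ramp from 0.0005 to 0.001 over the first 20% of training, decay to 5.0 × 10⁻⁷, 30 epochs for HCP-Rest) are taken from the quoted text.

```python
# Hedged sketch of the reported one-cycle schedule, not the authors' implementation.
import torch

model = torch.nn.Linear(128, 2)               # hypothetical stand-in for the actual model
optimizer = torch.optim.Adam(model.parameters(), lr=0.0005)   # optimizer choice assumed

epochs, steps_per_epoch = 30, 100             # 30 epochs (HCP-Rest); step count illustrative
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer,
    max_lr=0.001,                             # peak learning rate
    total_steps=epochs * steps_per_epoch,
    pct_start=0.2,                            # increase lr during the first 20% of training
    div_factor=0.001 / 0.0005,                # initial lr = max_lr / div_factor = 0.0005
    final_div_factor=0.0005 / 5.0e-7,         # final lr = initial lr / final_div_factor = 5e-7
)

for step in range(epochs * steps_per_epoch):
    # forward/backward pass and loss computation would go here
    optimizer.step()
    scheduler.step()
```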