Long-range Brain Graph Transformer

Authors: Shuo Yu, Shan Jin, Ming Li, Tabinda Sarwar, Feng Xia

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on the ABIDE and ADNI datasets demonstrate that ALTER consistently outperforms generalized state-of-the-art graph learning methods (including SAN, Graphormer, GraphTrans, and LRGNN) and graph-learning-based brain network analysis methods (including FBNETGEN, BrainNetGNN, BrainGNN, and BrainNetTF) in neurological disease diagnosis.
Researcher Affiliation | Academia | (1) School of Computer Science and Technology, Dalian University of Technology, China; (2) Key Laboratory of Social Computing and Cognitive Intelligence (Dalian University of Technology), Ministry of Education, China; (3) School of Software, Dalian University of Technology, China; (4) Zhejiang Institute of Optoelectronics, China; (5) Zhejiang Key Laboratory of Intelligent Education Technology and Application, Zhejiang Normal University, China; (6) School of Computing Technologies, RMIT University, Australia
Pseudocode | No | The paper describes the proposed framework and its steps in detail, but it does not include a formal pseudocode or algorithm block.
Open Source Code | Yes | The implementation is available at https://github.com/yushuowiki/ALTER.
Open Datasets | Yes | (1) Autism Brain Imaging Data Exchange (ABIDE, http://preprocessed-connectomes-project.org/abide/), which contains 519 autism spectrum disorder (ASD) samples and 493 normal controls; (2) Alzheimer's Disease Neuroimaging Initiative (ADNI, https://adni.loni.usc.edu/), which contains 54 Alzheimer's disease (AD) samples and 76 normal controls.
Dataset Splits | Yes | "For all datasets, we randomly divide the training set, evaluation set and test set by the ratio of 7:1:2." (A split sketch follows the table.)
Hardware Specification | Yes | All experiments are implemented using the PyTorch framework, and computations are performed on a single Tesla V100 GPU.
Software Dependencies | No | The paper mentions the PyTorch framework but does not provide specific version numbers for PyTorch or any other software dependencies such as Python, CUDA, or auxiliary libraries.
Experiment Setup | Yes | The number of steps K for the adaptive random walk is set to 16; the self-attention module uses L = 2 nonlinear mapping layers and M = 4 attention heads. For all datasets, the training, evaluation, and test sets are randomly divided at a 7:1:2 ratio. During training, Adam is used as the optimizer and CosLR as the scheduler, with an initial learning rate of 1e-4 and a weight decay of 1e-4; the batch size is 16 and training runs for 200 epochs. (Random-walk and training sketches follow the table.)
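The 7:1:2 division reported in the Dataset Splits row can be reproduced with a standard random split. Below is a minimal sketch, assuming the preprocessed ABIDE or ADNI samples are wrapped in a map-style PyTorch Dataset; the function name, dataset argument, and seed are placeholders, not taken from the paper.

```python
import torch
from torch.utils.data import Dataset, random_split

def split_7_1_2(dataset: Dataset, seed: int = 0):
    """Randomly split a dataset into train/eval/test sets at a 7:1:2 ratio."""
    n = len(dataset)
    n_train, n_eval = int(0.7 * n), int(0.1 * n)
    n_test = n - n_train - n_eval          # rounding leftover goes to the test set
    generator = torch.Generator().manual_seed(seed)
    return random_split(dataset, [n_train, n_eval, n_test], generator=generator)

# Usage (dataset is a placeholder for the preprocessed brain-network samples):
# train_set, eval_set, test_set = split_7_1_2(dataset)
```

Computing the test size as the remainder guarantees the three parts always sum to len(dataset), whichever way the 0.7 and 0.1 fractions round.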
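The Experiment Setup row fixes the number of random-walk steps K at 16. ALTER's walk is adaptive (its transition behavior is learned; see the repository linked above for the actual implementation), but as a point of reference, here is a minimal sketch of the standard, non-adaptive K-step random-walk encoding that such methods build on. The function name and the use of return probabilities as node features are assumptions, not details from the paper.

```python
import torch

def random_walk_encoding(adj: torch.Tensor, K: int = 16) -> torch.Tensor:
    """Return-probability encoding from 1..K-step random walks.

    adj: (N, N) weighted adjacency matrix of a brain network.
    Output: (N, K) tensor; column k holds each node's probability of
    returning to itself after k+1 steps of a plain random walk.
    """
    deg = adj.sum(dim=1).clamp(min=1e-8)   # node degrees (avoid divide-by-zero)
    P = adj / deg.unsqueeze(1)             # row-normalized transition matrix
    enc, Pk = [], P
    for _ in range(K):
        enc.append(torch.diagonal(Pk))     # diagonal = k-step return probabilities
        Pk = Pk @ P
    return torch.stack(enc, dim=1)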
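The remaining hyperparameters in the Experiment Setup row translate directly into a PyTorch training configuration. A minimal sketch follows, reusing train_set from the split sketch above and substituting a placeholder linear classifier for ALTER itself; "CosLR" is read here as cosine annealing over the full run, since the paper does not name the exact scheduler class.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader

LEARNING_RATE = 1e-4   # initial learning rate from the paper
WEIGHT_DECAY  = 1e-4   # weight decay from the paper
BATCH_SIZE    = 16
EPOCHS        = 200

model = nn.Linear(200, 2)            # placeholder classifier, NOT ALTER itself
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(),
                             lr=LEARNING_RATE, weight_decay=WEIGHT_DECAY)
# Assumed scheduler: cosine annealing over the 200-epoch run.
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=EPOCHS)

loader = DataLoader(train_set, batch_size=BATCH_SIZE, shuffle=True)
for epoch in range(EPOCHS):
    for features, labels in loader:  # assumes (feature, label) samples
        optimizer.zero_grad()
        loss = criterion(model(features), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()                 # one cosine step per epoch
```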