Brain Network Transformer
Authors: Xuan Kan, Wei Dai, Hejie Cui, Zilong Zhang, Ying Guo, Carl Yang
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiment results show clear improvements of our proposed BRAIN NETWORK TRANSFORMER on both the public ABIDE and our restricted ABCD datasets. The implementation is available at https://github.com/Wayfear/BrainNetworkTransformer. |
| Researcher Affiliation | Academia | 1Emory University 2Stanford University 3University of International Business and Economics {xuan.kan,hejie.cui,yguo2,j.carlyang}@emory.edu dvd.ai@stanford.edu 201957020@uibe.edu.cn |
| Pseudocode | No | The paper does not include pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | The implementation is available at https://github.com/Wayfear/BrainNetworkTransformer. |
| Open Datasets | Yes | We conduct experiments on two real-world fMRI datasets. (a) Autism Brain Imaging Data Exchange (ABIDE): This dataset collects resting-state functional magnetic resonance imaging (rs-fMRI) data from 17 international sites, and all data are anonymous [6]. The used dataset contains brain networks from 1009 subjects, with 516 (51.14%) being Autism spectrum disorder (ASD) patients (positives). The region definition is based on Craddock 200 atlas [12]. As the most convenient open-source large-scale dataset, it provides generated brain networks and can be downloaded directly without permission request. (b) Adolescent Brain Cognitive Development Study (ABCD): This is one of the largest publicly available fMRI datasets with restricted access (a strict data requesting process needs to be followed to obtain the data) [8]. The data we use in the experiments are fully anonymized brain networks with only biological sex labels. After the quality control process, 7901 subjects are included in the analysis, with 3961 (50.1%) among them being female. The region definition is based on the HCP 360 ROI atlas [24]. |
| Dataset Splits | Yes | We randomly split 70% of the datasets for training, 10% for validation, and the remaining are utilized as the test set. |
| Hardware Specification | Yes | The model is trained on an NVIDIA Quadro RTX 8000. |
| Software Dependencies | No | The paper mentions using an 'Adam optimizer' but does not specify version numbers for any software libraries, frameworks, or programming languages used (e.g., PyTorch, Python, CUDA versions). |
| Experiment Setup | Yes | For experiments, we use a two-layer Multi-Head Self-Attention Module and set the number of heads M to 4 for each layer. We randomly split 70% of the datasets for training, 10% for validation, and the remaining are utilized as the test set. In the training process of BRAINNETTF, we use an Adam optimizer with an initial learning rate of 10⁻⁴ and a weight decay of 10⁻⁴. The batch size is set as 64. All models are trained for 200 epochs, and the epoch with the highest AUROC performance on the validation set is used for performance comparison on the test set. |
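The reported split (70% train, 10% validation, remainder test) can be sketched as a simple random partition of subject indices. This is a minimal illustration, not the authors' code; the function name, seed, and use of NumPy are assumptions.

```python
import numpy as np

def split_indices(n, train_frac=0.7, val_frac=0.1, seed=0):
    """Randomly partition n subject indices into train/val/test sets,
    mirroring the 70%/10%/20% split described in the report.
    (Hypothetical helper; the paper does not specify the exact procedure.)"""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(n)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    train = perm[:n_train]
    val = perm[n_train:n_train + n_val]
    test = perm[n_train + n_val:]
    return train, val, test

# Example with the ABIDE subject count from the table (1009 subjects)
train, val, test = split_indices(1009)
```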
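The setup mentions a two-layer Multi-Head Self-Attention Module with M = 4 heads per layer. Below is a minimal NumPy sketch of one scaled dot-product multi-head self-attention layer over node features, only to illustrate the mechanism; the weight shapes, the absence of output projection and biases, and the function name are simplifying assumptions, not the paper's implementation.

```python
import numpy as np

def multi_head_self_attention(X, W_q, W_k, W_v, num_heads=4):
    """One multi-head self-attention layer (NumPy sketch).
    X: (nodes, dim) node feature matrix; W_q/W_k/W_v: (dim, dim) projections.
    The paper stacks two such layers with M = 4 heads each."""
    n, d = X.shape
    d_head = d // num_heads
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    outs = []
    for h in range(num_heads):
        s = slice(h * d_head, (h + 1) * d_head)
        # Scaled dot-product attention scores for this head
        scores = Q[:, s] @ K[:, s].T / np.sqrt(d_head)
        # Row-wise softmax (numerically stabilized)
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)
        outs.append(w @ V[:, s])
    # Concatenate the per-head outputs back to the model dimension
    return np.concatenate(outs, axis=-1)
```

The output has the same shape as the input, so two layers can be stacked directly, as the reported configuration requires.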