Learning High-Order Relationships of Brain Regions

Authors: Weikang Qiu, Huangrui Chu, Selena Wang, Haolan Zuo, Xiaoxiao Li, Yize Zhao, Rex Ying

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our comprehensive experiments demonstrate the effectiveness of our model. Our model outperforms the state-of-the-art predictive model by an average of 11.2%, regarding the quality of hyperedges measured by CPM, a standard protocol for studying brain connections. ... We evaluate our methods on the open-source ABIDE dataset and the restricted ABCD dataset. We quantitatively evaluate our approach by a commonly used protocol for studying brain connections, CPM (Shen et al., 2017) (Appendix B), and show that our model outperforms the state-of-the-art deep learning models by an average of 11.2% on a comprehensive benchmark. Our post-hoc analysis demonstrates that hyperedges of higher degrees are considered more significant, which indicates the significance of high-order relationships in human brains. (A sketch of the CPM protocol follows this table.)
Researcher Affiliation | Academia | 1) Yale University, New Haven, USA; 2) University of British Columbia, Vancouver, Canada; 3) Vector Institute, Toronto, Canada. Correspondence to: Rex Ying <rex.ying@yale.edu>.
Pseudocode | No | No pseudocode or algorithm blocks are explicitly presented in the paper.
Open Source Code | Yes | Source code is available at https://github.com/Graph-and-Geometric-Learning/HyBRiD.
Open Datasets | Yes | 1) Autism Brain Imaging Data Exchange (ABIDE) (Craddock et al., 2013) is an open-source dataset. ... 2) Adolescent Brain Cognitive Development (ABCD) (Casey et al., 2018) is one of the largest public fMRI datasets. (A download sketch for ABIDE follows this table.)
Dataset Splits | Yes | We randomly split the data into train, validation, and test sets in a stratified fashion. The split ratio is 8:1:1. (A split sketch follows this table.)
Hardware Specification | Yes | We train our model on a machine with an Intel Xeon Gold 6326 CPU and RTX A5000 GPUs.
Software Dependencies | Yes | See Table 5 for the software used and its versions. Table 5 (software, version): python 3.8.13; pytorch 1.11.0; cudatoolkit 11.3; numpy 1.23.3; ai2-tango 1.2.0; nibabel 4.0.2.
Experiment Setup | Yes | Hyperparameter choices and other details can be found in Appendix E. Table 6 (Hyperparameter choices; notation, meaning, value): lr, learning rate, 1×10⁻³; K, number of hyperedges, 32; β, trade-off coefficient of the information bottleneck, 0.2; [h1, h2, h3], hidden sizes of the dimension-reduction MLP, [32, 8, 1]; B, batch size, 64. (A config sketch follows this table.)
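
The Research Type row evaluates hyperedge quality with CPM (Shen et al., 2017). Below is a minimal sketch of that protocol for context only; it is not the authors' evaluation code, and the Pearson-correlation feature selection, the p-value threshold, and the ordinary least-squares fit are assumptions made here for illustration.

```python
# Minimal sketch of connectome-based predictive modeling (CPM, Shen et al., 2017).
# NOT the authors' evaluation code; threshold, correlation choice, and the
# linear model below are illustrative assumptions.
import numpy as np
from scipy import stats

def cpm_predict(train_feats, train_scores, test_feats, p_thresh=0.01):
    """Predict behavioral scores from (hyper)edge strengths.

    train_feats: (n_train, n_feats) connectivity features per subject.
    train_scores: (n_train,) behavioral measure.
    test_feats: (n_test, n_feats) features of held-out subjects.
    """
    # 1) Correlate every feature with the behavioral score on training subjects.
    n_feats = train_feats.shape[1]
    r, p = np.zeros(n_feats), np.ones(n_feats)
    for j in range(n_feats):
        r[j], p[j] = stats.pearsonr(train_feats[:, j], train_scores)
    pos = (p < p_thresh) & (r > 0)  # positively related features
    neg = (p < p_thresh) & (r < 0)  # negatively related features

    # 2) Summarize each subject by the summed strength of the selected features.
    def summarize(x):
        return np.stack([x[:, pos].sum(axis=1), x[:, neg].sum(axis=1)], axis=1)

    # 3) Fit a linear model on the training summaries and predict the test scores.
    X_tr = np.column_stack([summarize(train_feats), np.ones(len(train_feats))])
    X_te = np.column_stack([summarize(test_feats), np.ones(len(test_feats))])
    coef, *_ = np.linalg.lstsq(X_tr, train_scores, rcond=None)
    return X_te @ coef
```

Prediction quality under this protocol is then typically reported as the correlation between predicted and observed scores on the held-out subjects.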
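
For the Open Datasets row, one common way to obtain preprocessed ABIDE data is nilearn's fetch_abide_pcp helper, sketched below. The paper does not state which preprocessing pipeline, atlas, or derivatives were used, so those arguments are assumptions; the ABCD dataset is access-restricted and cannot be fetched this way.

```python
# Hedged sketch for downloading preprocessed ABIDE data with nilearn.
# Pipeline, atlas, and local path are illustrative choices, not necessarily
# those used in the paper.
from nilearn.datasets import fetch_abide_pcp

abide = fetch_abide_pcp(
    data_dir="./abide",          # local cache directory (illustrative)
    pipeline="cpac",             # one of the ABIDE Preprocessed pipelines
    derivatives=["rois_cc200"],  # ROI time series under the CC200 atlas
    quality_checked=True,
)
print(len(abide["rois_cc200"]), "subjects with CC200 ROI time series")
```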
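
The Dataset Splits row describes a stratified 8:1:1 split. A minimal sketch, assuming scikit-learn (the paper states the ratio and stratification, not the tooling):

```python
# Stratified 8:1:1 train/validation/test split; the random seed and the use of
# scikit-learn are assumptions for illustration.
from sklearn.model_selection import train_test_split

def split_811(X, y, seed=0):
    # First hold out 20% of the data, stratified by label.
    X_train, X_hold, y_train, y_hold = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=seed)
    # Split the held-out 20% in half: 10% validation, 10% test.
    X_val, X_test, y_val, y_test = train_test_split(
        X_hold, y_hold, test_size=0.5, stratify=y_hold, random_state=seed)
    return (X_train, y_train), (X_val, y_val), (X_test, y_test)
```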
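
Finally, the Experiment Setup row's Table 6 hyperparameters collected into a plain configuration dictionary; the key names are illustrative and not taken from the authors' code.

```python
# Hyperparameters reported in Table 6 of the paper (key names are illustrative).
config = {
    "lr": 1e-3,                  # learning rate
    "K": 32,                     # number of hyperedges
    "beta": 0.2,                 # trade-off coefficient of the information bottleneck
    "hidden_sizes": [32, 8, 1],  # dimension-reduction MLP sizes [h1, h2, h3]
    "batch_size": 64,            # B
}
```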