Self-Promoted Clustering-based Contrastive Learning for Brain Networks Pretraining

Authors: Junbo Ma, Caixuan Luo, Jia Hou, Kai Zhao

IJCAI 2024

Reproducibility assessment. Each entry lists the variable, the extracted result, and the LLM's supporting excerpt from the paper.
Research Type: Experimental
  "Comprehensive experiments are conducted on an open-access schizophrenic dataset, demonstrating the effectiveness of the proposed method."
Researcher Affiliation: Academia
  Junbo Ma (1,2), Caixuan Luo (3), Jia Hou (2), and Kai Zhao (4). Affiliations: (1) School of Communication Engineering, Hangzhou Dianzi University, Hangzhou 310018, China; (2) Lishui Institute of Hangzhou Dianzi University, Hangzhou Dianzi University, Hangzhou 310018, China; (3) Guangxi Key Lab of Multi-Source Information Mining and Security, Guangxi Normal University, Guilin 541000, China; (4) Department of Neurosurgery, First Medical Center, Chinese PLA General Hospital, Beijing 100853, China.
Pseudocode: Yes
  Algorithm 1: SPCCL pre-training
    Data pool: mix of the training set and the extra healthy subjects.
    Input: M randomly selected contrastive pairs.
    Output: graph representations.
     1: Let t = 0.
     2: while t < T or not converged do
     3:   for each pair in M do
     4:     run the Siamese GCN
     5:     run the self-supervised readout
     6:   end for
     7:   run contrastive clustering
     8:   backpropagate with Loss = Loss_sup + λ · Loss_consist
     9:   promote the M contrastive pairs with the centroids
    10: end while
    11: return graph representations of the data
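The listing above is only an outline. Below is a minimal, hedged Python/PyTorch sketch of one plausible realisation of the loop; the dense two-layer GCN, mean-pooling readout, plain k-means for the clustering step, and InfoNCE for Loss_sup are common stand-ins chosen because the excerpt does not define these components, and every name in the sketch (DenseGCN, kmeans, info_nce, spccl_pretrain) is illustrative rather than the authors' code.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DenseGCN(nn.Module):
        """Toy two-layer GCN over a dense, normalised adjacency matrix."""
        def __init__(self, in_dim, hid_dim, out_dim):
            super().__init__()
            self.lin1 = nn.Linear(in_dim, hid_dim)
            self.lin2 = nn.Linear(hid_dim, out_dim)

        def forward(self, adj, x):            # adj: (N, N), x: (N, in_dim)
            h = F.relu(adj @ self.lin1(x))
            return adj @ self.lin2(h)         # node embeddings (N, out_dim)

    def readout(node_emb):
        # Stand-in for the self-supervised readout (line 5): mean pooling.
        return node_emb.mean(dim=0)

    def kmeans(z, k=2, iters=10):
        # Plain k-means as a stand-in for contrastive clustering (line 7).
        centroids = z[torch.randperm(z.size(0))[:k]].clone()
        for _ in range(iters):
            assign = torch.cdist(z, centroids).argmin(dim=1)
            for j in range(k):
                if (assign == j).any():
                    centroids[j] = z[assign == j].mean(dim=0)
        return centroids, assign

    def info_nce(z1, z2, tau=0.5):
        # Standard InfoNCE as a stand-in for Loss_sup (line 8).
        z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
        logits = z1 @ z2.t() / tau            # (M, M) pairwise similarities
        return F.cross_entropy(logits, torch.arange(z1.size(0)))

    def spccl_pretrain(pairs, model, T=100, lam=0.5):
        # pairs: list of ((adj1, x1), (adj2, x2)) tuples, one per contrastive pair.
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        for t in range(T):                                        # lines 1-2
            z1 = torch.stack([readout(model(a, x)) for (a, x), _ in pairs])
            z2 = torch.stack([readout(model(a, x)) for _, (a, x) in pairs])
            z = torch.cat([z1, z2], dim=0)                        # lines 3-6
            centroids, assign = kmeans(z.detach())                # line 7
            loss_consist = (z - centroids[assign]).pow(2).sum(dim=1).mean()
            loss = info_nce(z1, z2) + lam * loss_consist          # line 8
            opt.zero_grad()
            loss.backward()
            opt.step()
            # Line 9 ("promote the pairs with the centroids") is left as a
            # placeholder: the promotion rule is not given in this excerpt.
        return z.detach()                                         # line 11

Each contrastive pair is assumed here to be two views of one subject, encoded by the shared-weight (Siamese) GCN; the excerpt pins down neither the pairing scheme nor the promotion rule at line 9.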
Open Source Code: No
  No mention of open-source code or links to repositories.
Open Datasets: Yes
  "The schizophrenic dataset utilized in this study is from an open-access dataset that includes two types of brain network modalities: structural and functional connectomes. It consists of MRI data acquired from 27 schizophrenic patients and 27 matched healthy adults [Vohryzek et al., 2020]. An additional 70 healthy adults' MRI data are used for the pre-training [Griffa et al., 2019]."
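For context, here is a hypothetical sketch of how the pre-training data pool of Algorithm 1 ("mix training set and extra healthy subjects") could be assembled from the counts quoted above; the pairing of each subject's structural (SC) and functional (FC) connectomes is an assumption, since the excerpt does not say how contrastive pairs are formed.

    import random

    labelled = list(range(54))            # 27 patients + 27 matched controls
    extra_healthy = list(range(54, 124))  # 70 additional healthy adults
    # In practice only the current cross-validation training split of the 54
    # labelled subjects would enter the pool, to avoid test-set leakage.
    data_pool = labelled + extra_healthy

    M = 32                                # number of contrastive pairs (assumed)
    # Assumed pairing: the two modalities (SC, FC) of one randomly chosen subject.
    pairs = [(("SC", s), ("FC", s)) for s in random.sample(data_pool, M)]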
Dataset Splits: Yes
  "To ensure unbiased performance evaluation, we employ a 6-fold cross-validation strategy during the training process. This involves randomly dividing the dataset into three equal parts, where one-third of the samples from each class are selected as the testing set, while the remaining two-thirds serve as the training set."
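The quoted protocol is internally inconsistent: "6-fold" does not match the one-third-test / two-thirds-train split it describes, which is 3-fold cross-validation. Below is a minimal sketch of the split as literally described, using scikit-learn (an assumption; the paper's tooling is not stated) with stratification to keep the per-class balance in each fold.

    import numpy as np
    from sklearn.model_selection import StratifiedKFold

    y = np.array([0] * 27 + [1] * 27)       # 27 matched controls, 27 patients
    subjects = np.arange(len(y)).reshape(-1, 1)

    # n_splits=3 matches the described 1/3 test / 2/3 train split;
    # use n_splits=6 to follow the "6-fold" wording instead.
    skf = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
    for fold, (train_idx, test_idx) in enumerate(skf.split(subjects, y)):
        print(f"fold {fold}: {len(train_idx)} train / {len(test_idx)} test")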
Hardware Specification: No
  The only hardware described is the MRI acquisition equipment ("All subjects underwent scanning using a 3 Tesla Siemens Trio scanner equipped with a 32-channel head coil."); the compute hardware used for training and evaluation is not specified.
Software Dependencies: No
  The paper names model components such as the Siamese GCN and a bi-directional LSTM, but does not give software versions for the libraries, frameworks, or programming languages used in the implementation.
Experiment Setup: No
  The only setup detail reported is the cross-validation protocol quoted under Dataset Splits; hyperparameters and other training details are not specified.