Learning Robust Hierarchical Patterns of Human Brain across Many fMRI Studies

Authors: Dushyant Sahoo, Christos Davatzikos

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on simulated datasets demonstrate that the proposed method can estimate components with higher accuracy and reproducibility, while preserving age-related variation on a multi-center clinical dataset.
Researcher Affiliation | Academia | Dushyant Sahoo, Department of Electrical Engineering, University of Pennsylvania (sadu@seas.upenn.edu); Christos Davatzikos, Department of Radiology, University of Pennsylvania (christos.davatzikos@uphs.upenn.edu)
Pseudocode | No | The paper states 'Complete algorithm and the details about the optimization are described in Appendix B.', but the appendix content is not provided in the given text.
Open Source Code | No | The paper states 'All the code is implemented in MATLAB and will be released upon publication.', but no code was available at the time of review.
Open Datasets | Yes | We collected functional MRI data from 5 different multi-center imaging studies: 1) the Baltimore Longitudinal Study of Aging (BLSA) [32, 33], 2) the Coronary Artery Risk Development in Young Adults study (CARDIA) [34], 3) the UK Biobank (UKBB) [35], 4) the Open Access Series of Imaging Studies (OASIS) [36], and 5) the Aging Brain Cohort study (ABC) from the Penn Memory Center [37].
Dataset Splits | Yes | Optimal values of the hyperparameters α, β, µ and τ1 are selected from [0.1, 1], [1, 5], [0.1, 0.5, 1] and 10^[-2:2], respectively. The criterion for choosing the best hyperparameters is maximum split-sample reproducibility. We performed 5-fold cross-validation using an SVM with an RBF kernel. (A sketch of this selection protocol follows the table.)
Hardware Specification | Yes | All experiments were run on a single Ubuntu machine with four i7-6700HQ CPU cores.
Software Dependencies | No | The paper states 'All the code is implemented in MATLAB' but does not provide specific version numbers for MATLAB or any other software dependencies used in the experiments.
Experiment Setup | Yes | Optimal values of the hyperparameters α, β, µ and τ1 are selected from [0.1, 1], [1, 5], [0.1, 0.5, 1] and 10^[-2:2]. We used a feed-forward neural network for the classification model with two hidden layers. The network contains the following layers: a fully connected layer with 50 hidden units, a dropout layer with rate 0.2, ReLU, a fully connected layer with 4 hidden units, and a softmax layer. (A sketch of this network follows below.)
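
Because the authors' MATLAB implementation is unreleased, the reported selection protocol can only be illustrated, not reproduced. The following is a minimal Python/scikit-learn sketch of that protocol: a grid search over (α, β, µ, τ1) scored by split-sample reproducibility, followed by 5-fold cross-validation with an RBF-kernel SVM. The synthetic data, the dummy reproducibility score, and all variable names are assumptions for illustration, not the authors' code.

```python
import itertools

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))    # stand-in features derived from learned components
y = rng.integers(0, 4, size=200)  # stand-in labels for a 4-class problem

# Reported search grids: alpha in [0.1, 1], beta in [1, 5], mu in [0.1, 0.5, 1],
# and tau1 in 10^[-2:2], i.e. powers of ten from 1e-2 to 1e2.
grid = list(itertools.product([0.1, 1], [1, 5], [0.1, 0.5, 1],
                              10.0 ** np.arange(-2, 3)))

def split_sample_reproducibility(alpha, beta, mu, tau1, X):
    """Dummy stand-in for the paper's criterion. The real version would fit the
    hierarchical factorization on two halves of the subjects with the given
    hyperparameters and return the similarity of the recovered components;
    since that MATLAB code is unreleased, this toy score ignores them."""
    half = X.shape[0] // 2
    a, b = X[:half].mean(axis=0), X[half:].mean(axis=0)
    return float(np.corrcoef(a, b)[0, 1])

# Keep the hyperparameter setting with maximum split-sample reproducibility.
best = max(grid, key=lambda p: split_sample_reproducibility(*p, X))
print("selected (alpha, beta, mu, tau1):", best)

# Downstream evaluation as reported: 5-fold cross-validation, RBF-kernel SVM.
accuracy = cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean()
print("mean 5-fold CV accuracy:", round(float(accuracy), 3))
```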
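
The classification network is described precisely enough to reconstruct. Below is a minimal PyTorch sketch that follows the quoted layer order (fully connected with 50 units, dropout at 0.2, ReLU, fully connected with 4 units, softmax); the input width of 100 and the interpretation of the 4-unit layer as 4 output classes are assumptions, since the paper text quoted above fixes only the layer sizes.

```python
import torch
from torch import nn

# Classifier as described in the paper's experiment setup.
class Classifier(nn.Module):
    def __init__(self, in_features: int, n_classes: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, 50),  # fully connected, 50 hidden units
            nn.Dropout(p=0.2),           # dropout layer with rate 0.2
            nn.ReLU(),
            nn.Linear(50, n_classes),    # fully connected, 4 units
            nn.Softmax(dim=1),           # softmax over the classes
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = Classifier(in_features=100)   # input width is an assumption
probs = model(torch.randn(8, 100))    # batch of 8 dummy feature vectors
print(probs.shape)                    # torch.Size([8, 4]); each row sums to 1
```

For training, one would normally drop the explicit Softmax and feed the raw logits to nn.CrossEntropyLoss; it is kept here only to mirror the paper's layer list.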