Explainable Brain Age Prediction using coVariance Neural Networks

Authors: Saurabh Sihag, Gonzalo Mateos, Corey McMillan, Alejandro Ribeiro

NeurIPS 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this paper, we leverage coVariance neural networks (VNN) to propose an explanation-driven and anatomically interpretable framework for brain age prediction using cortical thickness features. Specifically, our brain age prediction framework extends beyond the coarse metric of brain age gap in Alzheimer's disease (AD), and we make two important observations: (i) VNNs can assign anatomical interpretability to elevated brain age gap in AD by identifying contributing brain regions; (ii) the interpretability offered by VNNs is contingent on their ability to exploit specific eigenvectors of the anatomical covariance matrix. (A minimal VNN sketch follows the table.)
Researcher Affiliation | Academia | Saurabh Sihag, Gonzalo Mateos, Corey McMillan, Alejandro Ribeiro. University of Pennsylvania, Philadelphia, PA; University of Rochester, Rochester, NY.
Pseudocode | No | The paper describes the VNN architecture and equations but does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks.
Open Source Code | Yes | Code: https://github.com/sihags/VNN_Brain_Age
Open Datasets | Yes | The OASIS-3 dataset is publicly available and hosted on central.xnat.org. Access to the OASIS-3 dataset may be requested through https://www.oasis-brains.org/. The MRI images for the 3.0T standardized ADNI-1 dataset at the baseline visit were downloaded from https://adni.loni.usc.edu/.
Dataset Splits | Yes | The HC group was split into a 90/10 training/test split, and the covariance matrix was set to be the anatomical covariance evaluated from the training set. A part of the training set was used as a validation set and the other part was used for training the VNN model. The VNN model with the minimum mean-squared-error performance on the remaining 61 samples in the training set (which acted as a validation set) was included in the set of nominal models for this permutation of the training set. (A sketch of this split protocol follows the table.)
Hardware Specification | No | The paper does not provide specific hardware details (such as GPU/CPU models, memory, or computing environments) used for running the experiments.
Software Dependencies | Yes | The loss was optimized using batch stochastic gradient descent with the Adam optimizer available in the PyTorch library [75] for up to 100 epochs. The batch size was 78 (determined via the optuna package [44]).
Experiment Setup | Yes | The hyperparameters for the VNN architecture and the learning rate of the optimizer were chosen according to a hyperparameter search procedure [44]. The VNN model had L = 2 layers with a filter bank such that F = 5, with 6 filter taps in the first layer and 10 filter taps in the second layer. The learning rate for the Adam optimizer was set to 0.059. The number of learnable parameters for this VNN was 290. The batch size was 78 (determined via the optuna package [44]). (A hyperparameter-search sketch follows the table.)
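
To make the Research Type and Experiment Setup rows concrete, below is a minimal sketch of a coVariance filter bank and a two-layer VNN in PyTorch. It assumes the standard polynomial coVariance filter form H(C)x = sum_k h_k C^k x from the VNN literature; the class names, the mean-over-regions readout, and the weight initialization are illustrative choices rather than the authors' implementation (see the linked repository for that).

```python
# Minimal sketch of a coVariance filter bank and a two-layer VNN, following
# the quoted setup: L = 2 layers, F = 5 filters, 6 and 10 filter taps.
import torch
import torch.nn as nn


class CovFilterBank(nn.Module):
    """Bank of coVariance filters: learnable polynomials in the covariance C."""

    def __init__(self, in_features, out_features, num_taps):
        super().__init__()
        # One (in_features x out_features) weight per tap k = 0..K-1.
        self.weights = nn.Parameter(
            0.1 * torch.randn(num_taps, in_features, out_features)
        )

    def forward(self, x, C):
        # x: (batch, num_regions, in_features); C: (num_regions, num_regions)
        out = 0.0
        z = x
        for k in range(self.weights.shape[0]):
            out = out + z @ self.weights[k]  # accumulate h_k C^k x
            z = C @ z                        # next power of C applied to x
        return out


class SimpleVNN(nn.Module):
    """Two-layer VNN sketch: filter banks with ReLU, then a mean readout."""

    def __init__(self, F=5, taps1=6, taps2=10):
        super().__init__()
        self.layer1 = CovFilterBank(1, F, taps1)
        self.layer2 = CovFilterBank(F, F, taps2)
        self.readout = nn.Linear(F, 1)

    def forward(self, x, C):
        h = torch.relu(self.layer1(x, C))
        h = torch.relu(self.layer2(h, C))
        # Average over brain regions, then map to a scalar predicted age.
        return self.readout(h.mean(dim=1)).squeeze(-1)


# Illustrative usage with the quoted training configuration: Adam optimizer,
# learning rate 0.059, mean-squared-error loss, batches of 78 subjects.
num_regions = 68                        # placeholder region count
x = torch.randn(78, num_regions, 1)     # cortical thickness features
C = torch.eye(num_regions)              # placeholder anatomical covariance
model = SimpleVNN()
optimizer = torch.optim.Adam(model.parameters(), lr=0.059)
loss = nn.MSELoss()(model(x, C), torch.randn(78))
loss.backward()
optimizer.step()
```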
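The Dataset Splits row describes a 90/10 split of the healthy-control group, with the anatomical covariance estimated from the training portion only and a 61-sample validation slice used for model selection. A hedged sketch of that protocol is below; the array names, sizes, and random seeds are placeholders, not the authors' values.

```python
# Hedged sketch of the split protocol in the Dataset Splits row.
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_subjects, n_regions = 650, 68                       # illustrative sizes
thickness = rng.normal(size=(n_subjects, n_regions))  # cortical thickness features
age = rng.uniform(45.0, 90.0, size=n_subjects)        # chronological age targets

# 90/10 training/test split of the HC group.
X_train, X_test, y_train, y_test = train_test_split(
    thickness, age, test_size=0.10, random_state=0
)

# Anatomical covariance is evaluated from the training set only.
C = np.cov(X_train, rowvar=False)                     # (n_regions, n_regions)

# Part of the training set acts as a validation set for selecting the
# nominal model (the row above reports 61 validation samples).
X_fit, X_val, y_fit, y_val = train_test_split(
    X_train, y_train, test_size=61, random_state=0
)
```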
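The Software Dependencies and Experiment Setup rows indicate that the learning rate, architecture, and batch size came from a hyperparameter search with the optuna package. The sketch below shows what such a search can look like with optuna's standard API; the search ranges and the dummy objective are assumptions made for illustration, and the real objective would train the VNN with Adam for up to 100 epochs and return validation mean squared error.

```python
# Hedged sketch of an optuna hyperparameter search, as cited in the rows above.
import optuna


def train_and_validate(lr, batch_size, taps1, taps2, width):
    """Placeholder for training a VNN and returning its validation MSE."""
    # Dummy objective with a minimum near the values reported in the paper
    # (learning rate 0.059, batch size 78), used only to make this runnable.
    return (lr - 0.059) ** 2 + 1e-4 * abs(batch_size - 78)


def objective(trial):
    lr = trial.suggest_float("lr", 1e-3, 1e-1, log=True)
    batch_size = trial.suggest_int("batch_size", 16, 128)
    taps1 = trial.suggest_int("taps_layer1", 2, 10)
    taps2 = trial.suggest_int("taps_layer2", 2, 12)
    width = trial.suggest_int("num_filters", 2, 8)
    return train_and_validate(lr, batch_size, taps1, taps2, width)


study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=50)
print(study.best_params)
```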