coVariance Neural Networks

Authors: Saurabh Sihag, Gonzalo Mateos, Corey McMillan, Alejandro Ribeiro

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "In this section, we discuss our experiments on different datasets. Primarily, we evaluate VNNs on a regression problem on different neuroimaging datasets curated at University of Pennsylvania, where we regress human chronological age (time since birth) against cortical thickness data." |
| Researcher Affiliation | Academia | Saurabh Sihag, University of Pennsylvania (sihags@pennmedicine.upenn.edu); Gonzalo Mateos, University of Rochester (gmateosb@ece.rochester.edu); Corey McMillan, University of Pennsylvania (cmcmilla@pennmedicine.upenn.edu); Alejandro Ribeiro, University of Pennsylvania (aribeiro@seas.upenn.edu) |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | A data and code availability statement appears in Appendix A, which states: "All data and code will be made available at X upon publication." This means the code is not currently available. |
| Open Datasets | No | "ABC Dataset: ABC dataset consists of the cortical thickness data collected from a heterogeneous population of n = 341 subjects (mean age = 68.1 years, standard deviation = 13) that consists of healthy adults, and subjects with mild cognitive impairment or Alzheimer's disease. Multi-resolution FTDC Datasets: FTDC Datasets consist of the cortical thickness data from n = 170 healthy subjects (mean age = 64.26 years, standard deviation = 8.26). The ABC dataset was provided by the Penn Alzheimer's Disease Research Center (ADRC; NIH AG072979) at University of Pennsylvania. The MRI data for FTDC datasets were provided by the Penn Frontotemporal Degeneration Center (NIH AG066597). Cortical thickness data were made available by Penn Image Computing and Science Lab at University of Pennsylvania." (The paper does not provide a direct link, DOI, or concrete public-access information for these curated datasets beyond their institutional origin.) |
| Dataset Splits | Yes | "We randomly split ABC dataset into a 90/10 train/test split. The training set is randomly subdivided into subsets of 273 and 34 samples, such that the VNN is trained with respect to the mean squared error loss between the predicted age and the true age in 273 samples. ... The VNN model with the best minimum mean squared error performance on the remaining 34 samples (which acts as a validation set) is included in the set of nominal models for this permutation of the training set." (A minimal split sketch appears after this table.) |
| Hardware Specification | Yes | "The experiments were run on a 12GB Nvidia GeForce RTX 3060 GPU." |
| Software Dependencies | No | "The loss is optimized using batch stochastic gradient descent with Adam optimizer available in PyTorch library [36] for up to 100 epochs. Regression is implemented using sklearn package in python [37]." (No version numbers are specified for PyTorch or scikit-learn. An illustrative training-loop sketch appears after this table.) |
| Experiment Setup | Yes | "The VNN consists of 2 layers with 2 filter taps each, a filter bank of 13 m-dimensional outputs per layer for m = 104 dimensions of the input data, and a readout layer that calculates the unweighted mean of the outputs at the last layer to form an estimate for age. The hyperparameters for the VNN architecture and learning rate of the optimizer in this experiment and all subsequent VNN experiments in this section are chosen by a hyperparameter optimization framework called Optuna [35]. ... The learning rate for the optimizer is set to 0.0151." (Architecture and hyperparameter-search sketches appear after this table.) |
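
To make the Dataset Splits row concrete, here is a minimal sketch of the described 90/10 train/test split of the n = 341 ABC dataset, with the training portion subdivided into 273 fitting and 34 validation samples. The placeholder data, random seeds, and the use of scikit-learn's `train_test_split` are assumptions for illustration, not the authors' code.

```python
# Illustrative sketch of the split described in the Dataset Splits row
# (not the authors' code). X stands in for the (341, 104) cortical-thickness
# matrix and y for chronological ages; both are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)               # seed is an assumption
n, m = 341, 104                              # ABC dataset size, input dimension
X = rng.randn(n, m)                          # placeholder cortical-thickness data
y = rng.uniform(45, 90, size=n)              # placeholder ages

# 90/10 train/test split of the ABC dataset (307 train, 34 test).
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=34, random_state=0)

# Subdivide the training set into 273 fitting and 34 validation samples.
X_fit, X_val, y_fit, y_val = train_test_split(
    X_train, y_train, train_size=273, random_state=0)

print(X_fit.shape, X_val.shape, X_test.shape)  # (273, 104) (34, 104) (34, 104)
```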
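The Software Dependencies row pins down the optimization settings: batch stochastic gradient descent with the Adam optimizer, MSE loss, up to 100 epochs, and (per the Experiment Setup row) a learning rate of 0.0151. Below is an illustrative PyTorch training loop under those settings; the batch size and the DataLoader wiring are assumptions, and `model` is any module mapping an input batch to age predictions.

```python
# Illustrative PyTorch training loop matching the reported settings: Adam,
# MSE loss, up to 100 epochs, learning rate 0.0151. Batch size and the
# DataLoader construction are assumptions, not the authors' code.
import torch
from torch.utils.data import DataLoader, TensorDataset

def train(model, X_fit, y_fit, batch_size=32, epochs=100, lr=0.0151):
    loader = DataLoader(
        TensorDataset(torch.as_tensor(X_fit, dtype=torch.float32),
                      torch.as_tensor(y_fit, dtype=torch.float32)),
        batch_size=batch_size, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()
    for _ in range(epochs):
        for xb, yb in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(xb), yb)   # predicted age vs. true age
            loss.backward()
            optimizer.step()
    return model
```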
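The Experiment Setup row pins down the VNN shape: 2 layers, 2 filter taps each, 13 outputs per layer, m = 104 input dimensions, and an unweighted-mean readout. Below is an illustrative PyTorch sketch built on the paper's coVariance filter, a polynomial in the sample covariance matrix C (reading "2 filter taps" as the k = 0 and k = 1 terms); the ReLU nonlinearity, weight initialization, and tensor layout are assumptions, not the authors' implementation.

```python
# Illustrative sketch of the described VNN architecture (assumptions noted):
# 2 layers, 2 filter taps per coVariance filter, 13 channels per layer,
# unweighted-mean readout producing a scalar age estimate.
import torch
import torch.nn as nn

class CovFilterLayer(nn.Module):
    """One VNN layer: a bank of coVariance filters plus a pointwise ReLU."""
    def __init__(self, in_ch, out_ch, taps):
        super().__init__()
        # One (in_ch, out_ch) weight matrix per filter tap k = 0, ..., taps-1.
        self.weights = nn.Parameter(0.1 * torch.randn(taps, in_ch, out_ch))

    def forward(self, x, C):
        # x: (batch, m, in_ch); C: (m, m) sample covariance matrix.
        z, xk = 0.0, x
        for k in range(self.weights.shape[0]):
            z = z + xk @ self.weights[k]                # accumulate C^k x H_k
            xk = torch.einsum('ij,bjf->bif', C, xk)     # next power: C^(k+1) x
        return torch.relu(z)

class VNN(nn.Module):
    def __init__(self, C, m=104, width=13, taps=2):
        super().__init__()
        self.register_buffer('C', torch.as_tensor(C, dtype=torch.float32))
        self.layer1 = CovFilterLayer(1, width, taps)
        self.layer2 = CovFilterLayer(width, width, taps)

    def forward(self, x):
        # x: (batch, m) cortical-thickness vector -> (batch, m, 1) feature tensor.
        h = self.layer1(x.unsqueeze(-1), self.C)
        h = self.layer2(h, self.C)
        return h.mean(dim=(1, 2))   # unweighted-mean readout -> age estimate
```

Here C would be the sample covariance matrix of the training data, e.g. `np.cov(X_fit, rowvar=False)`, which plays the role of the graph shift operator in the paper; with C baked in as a buffer, the model plugs directly into the training-loop sketch above.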
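Finally, the paper states that the architecture hyperparameters and the learning rate were chosen with Optuna [35]. A minimal sketch of such a search follows, reusing `VNN`, `train`, and the split arrays from the sketches above; the search space, trial count, and objective wiring are entirely assumptions.

```python
# Illustrative Optuna search over VNN hyperparameters (assumed search space).
import numpy as np
import optuna
import torch

def objective(trial):
    lr = trial.suggest_float('lr', 1e-4, 1e-1, log=True)   # paper's found value: 0.0151
    width = trial.suggest_int('width', 4, 32)              # paper's found value: 13
    model = VNN(np.cov(X_fit, rowvar=False), width=width)
    train(model, X_fit, y_fit, lr=lr)
    with torch.no_grad():                                   # validation MSE on the 34 held-out samples
        pred = model(torch.as_tensor(X_val, dtype=torch.float32))
        return float(((pred - torch.as_tensor(y_val, dtype=torch.float32)) ** 2).mean())

study = optuna.create_study(direction='minimize')
study.optimize(objective, n_trials=50)
print(study.best_params)
```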