Local Spatiotemporal Representation Learning for Longitudinally-consistent Neuroimage Analysis

Authors: Mengwei Ren, Neel Dey, Martin Styner, Kelly Botteron, Guido Gerig

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Benchmarked across various segmentation tasks, the proposed framework outperforms both well-tuned randomly-initialized baselines and current self-supervised techniques designed for both i.i.d. and longitudinal datasets. These improvements are demonstrated across both longitudinal neurodegenerative adult MRI and developing infant brain MRI and yield both higher performance and longitudinal consistency.
Researcher Affiliation | Academia | Mengwei Ren (New York University, mengwei.ren@nyu.edu); Neel Dey (New York University, neel.dey@nyu.edu); Martin A. Styner (UNC-Chapel Hill, styner@cs.unc.edu); Kelly N. Botteron (WUSTL School of Medicine, botteronk@wustl.edu); Guido Gerig (New York University, gerig@nyu.edu)
Pseudocode | No | The paper describes its methodology in prose and does not include any pseudocode or algorithm blocks.
Open Source Code | Yes | Our code is available at https://github.com/mengweiren/longitudinal-representation-learning.
Open Datasets | Yes | We conduct experiments on two de-identified longitudinal neuroimaging datasets, and specifically design three tasks to benchmark different extents of biomedical domain gaps between the finetuning and testing data. ... OASIS3 [34] is a publicly-available dataset consisting of 1639 brain MRI scans of 992 longitudinally imaged subjects.
Dataset Splits | Yes | For both datasets, we perform a train/validation/test split on a subject-wise basis with 70%, 10%, and 20% of the participants. The validation set is used for model and hyperparameter selection and results are reported on a held-out test set. (A subject-wise split sketch is given after the table.)
Hardware Specification | Yes | All networks are trained with the Adam optimizer (β1 = 0.9 during pretraining, β1 = 0.5 during finetuning, and β2 = 0.999 in both settings) on a single Nvidia RTX8000 GPU (48GB vRAM).
Software Dependencies | No | The paper mentions using the Adam optimizer, but does not provide specific version numbers for software libraries such as PyTorch, TensorFlow, or Python.
Experiment Setup | Yes | We use a batch size of 3 crops and an initial learning rate of 2 × 10⁻⁴ for both pretraining and finetuning. All networks are trained with the Adam optimizer (β1 = 0.9 during pretraining, β1 = 0.5 during finetuning, and β2 = 0.999 in both settings)... The networks are pretrained for a maximum of 30,000 steps and the best model based on validation performance is used for fine-tuning for another 35,000 steps, alongside linear learning rate decay. All experiments are run with a fixed random seed due to limited computational budgets. Based on the ablation analysis in Tab. 2, we empirically choose λ = 1, α = 10, γ = 1e-3, β = 100 for all datasets, and use µ = 10⁻² for OASIS3 and µ = 10⁻³ for IBIS. (A training-configuration sketch is given after the table.)
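
The reported split is purely subject-wise, so every scan of a given participant lands in exactly one partition. The sketch below illustrates one way to implement such a 70/10/20 split; it is not taken from the authors' repository, and the identifiers, seed, and rounding choices are illustrative assumptions.

```python
# Minimal sketch of a subject-wise 70/10/20 split (illustrative, not the
# authors' code). `subject_ids` is a placeholder list of de-identified
# subject IDs; splitting at the subject level ensures that longitudinal
# timepoints of one participant never leak across partitions.
import random

def split_subjects(subject_ids, train=0.7, val=0.1, seed=0):
    ids = sorted(subject_ids)          # deterministic order before shuffling
    random.Random(seed).shuffle(ids)   # fixed seed -> reproducible split
    n_train = round(train * len(ids))
    n_val = round(val * len(ids))
    return {
        "train": ids[:n_train],
        "val": ids[n_train:n_train + n_val],
        "test": ids[n_train + n_val:],   # remaining ~20% held out
    }

# Example with placeholder IDs: 992 OASIS-3 subjects -> roughly 694 / 99 / 199.
splits = split_subjects([f"sub-{i:04d}" for i in range(992)])
```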
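
The training hyperparameters quoted above can be collected into a small configuration. The following PyTorch sketch shows one plausible realization; the framework choice, the placeholder `model`, and the `LambdaLR` form of the linear decay are assumptions, since the quoted text only states the numeric settings.

```python
# Sketch of the reported optimization settings (assuming PyTorch; the
# placeholder model and the exact decay schedule are illustrative).
import torch

model = torch.nn.Conv3d(1, 8, kernel_size=3)  # stand-in for the segmentation network
batch_size = 3  # number of 3D crops per step, as reported

# Pretraining: Adam with beta1 = 0.9, lr = 2e-4, up to 30,000 steps.
pretrain_opt = torch.optim.Adam(model.parameters(), lr=2e-4, betas=(0.9, 0.999))

# Finetuning: Adam with beta1 = 0.5, lr = 2e-4, 35,000 steps with linear decay.
finetune_steps = 35_000
finetune_opt = torch.optim.Adam(model.parameters(), lr=2e-4, betas=(0.5, 0.999))
scheduler = torch.optim.lr_scheduler.LambdaLR(
    finetune_opt, lr_lambda=lambda step: max(0.0, 1.0 - step / finetune_steps)
)

# Loss weights reported in the paper; mu is dataset-dependent.
loss_weights = dict(lam=1.0, alpha=10.0, gamma=1e-3, beta=100.0)
mu = {"OASIS3": 1e-2, "IBIS": 1e-3}
```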