A scalable generative model for dynamical system reconstruction from neuroimaging data

Authors: Eric Volkmann, Alena Brändle, Daniel Durstewitz, Georgia Koppe

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate its efficiency in reconstructing dynamical systems, including their state space geometry and long-term temporal properties, from just short BOLD time series. ... We trained 100 models on each of these 6 data sets. The following models were compared: the convSSM trained via SGD+GTF, the convSSM trained via SGD and no GTF, the standard SSM trained via SGD+GTF, and MINDy, a recently published method for DSR in fMRI [62]. ... We finally tested convSSM on empirical data, for which we chose the LEMON study...
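The GTF (generalized teacher forcing) training referenced above stabilizes gradient-based training of recurrent dynamics models by linearly interpolating the freely generated latent state with a state inferred from the data. A minimal NumPy sketch of the forcing step only — the linear-tanh latent model, the random "inferred" states, and the fixed interpolation weight `alpha` are illustrative placeholders, not the paper's implementation (in practice alpha is adapted or annealed during training):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy latent dynamics: a small tanh RNN standing in for the convSSM's
# latent model (illustrative only).
A = 0.9 * np.eye(3)

def step(z):
    return np.tanh(A @ z)

# Pretend these states were inferred from the observed BOLD series by an
# encoder / inversion step (here: random placeholders).
z_inferred = rng.normal(size=(50, 3))

alpha = 0.3  # GTF interpolation weight (placeholder value)

z = z_inferred[0]
trajectory = [z]
for t in range(1, len(z_inferred)):
    z_free = step(z)                                   # freely generated next state
    z = (1 - alpha) * z_free + alpha * z_inferred[t]   # GTF interpolation
    trajectory.append(z)

trajectory = np.stack(trajectory)
print(trajectory.shape)  # (50, 3)
```

With `alpha = 0`, the loop reduces to a fully autonomous rollout; with `alpha = 1`, it reduces to classical teacher forcing.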
Researcher Affiliation | Academia | 1 Department of Theoretical Neuroscience, Central Institute of Mental Health (CIMH), Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany; 2 Institute for Machine Learning, Johannes Kepler University, Linz, Austria; 3 Interdisciplinary Center for Scientific Computing, Heidelberg University, Heidelberg, Germany; 4 Faculty of Physics and Astronomy, Heidelberg University, Heidelberg, Germany; 5 Hector Institute for AI in Psychiatry & Dept. for Psychiatry and Psychotherapy, CIMH; 6 Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany
Pseudocode | Yes | The full inversion algorithm is provided in Algorithm 1, with additional information given in Appx. A.6. ... Algorithm 2: VISUSHRINK algorithm
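VisuShrink (the classic Donoho–Johnstone wavelet denoiser named in Algorithm 2) soft-thresholds wavelet detail coefficients at the universal threshold λ = σ̂·sqrt(2·ln n), with σ̂ estimated robustly from the detail coefficients via the median absolute deviation. A self-contained sketch using a one-level Haar transform — the paper's Algorithm 2 may use a different wavelet, more decomposition levels, or other details:

```python
import numpy as np

def haar_level1(x):
    """One-level Haar transform: approximation and detail coefficients."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def inv_haar_level1(a, d):
    """Invert the one-level Haar transform."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def soft_threshold(c, lam):
    """Shrink coefficients toward zero by lam, zeroing the small ones."""
    return np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)

def visushrink(x):
    a, d = haar_level1(x)
    sigma = np.median(np.abs(d)) / 0.6745      # robust noise estimate (MAD)
    lam = sigma * np.sqrt(2 * np.log(len(x)))  # universal threshold
    return inv_haar_level1(a, soft_threshold(d, lam))

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 256)
clean = np.sin(2 * np.pi * 4 * t)
noisy = clean + 0.3 * rng.normal(size=t.size)
denoised = visushrink(noisy)
print(np.mean((noisy - clean) ** 2), np.mean((denoised - clean) ** 2))
```

For a smooth signal like the sine above, the detail coefficients are dominated by noise, so thresholding them reduces the reconstruction error relative to the clean signal.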
Open Source Code | Yes | Code for the convSSM is available at https://github.com/humml-lab/GTF-ConvSSM.
Open Datasets | Yes | We finally tested convSSM on empirical data, for which we chose the LEMON study (Leipzig Study for Mind-Body-Emotion Interactions) as a publicly available data set. This data set was collected at the Max-Planck-Institute Leipzig [4]...
Dataset Splits | Yes | The time series were split 3:1 into training (Ttrain = 489) and test (Ttest = 163) sets, respectively. ... We trained 10 convSSM models on the first 375 time steps of each of these virtual experiments, treating the left-out 125 time points as a pseudo-empirical test set, and call the last 5,000 time points of the entire trajectory (i.e., time steps 5,001-10,000 of the full simulation) the ground truth (GT) test set
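The quoted 3:1 chronological split can be sketched as follows; the random array standing in for the BOLD series and the number of regions are illustrative, while the lengths 489/163 match the quote (time series must be split chronologically, not shuffled, to preserve temporal structure):

```python
import numpy as np

# Illustrative stand-in for a BOLD time series: T time points x N regions.
T, N = 652, 16
bold = np.random.default_rng(2).normal(size=(T, N))

# Chronological 3:1 split as in the quoted setup: the first 3/4 of the
# time points form the training set, the remainder the test set.
t_train = (T * 3) // 4   # 489
train, test = bold[:t_train], bold[t_train:]
print(train.shape, test.shape)  # (489, 16) (163, 16)
```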
Hardware Specification | Yes | Experiments were performed on a standard notebook with an Intel i5-8250U 1.60 GHz CPU and 8 GB RAM. ... All experiments were run on a system with a Xeon Gold 6248 CPU and 768 GB of RAM.
Software Dependencies | No | The paper lists software components such as PyTorch, neurolib, DynamicalSystems.jl, lfads-torch, and ssm, and uses RAdam as an optimizer, but it does not specify version numbers for these dependencies. It mentions 'neurolib: A simulation framework for whole-brain neural mass modeling' [12] and the 'DynamicalSystems.jl Julia package ([15])', but without specific versions.
Experiment Setup | Yes | Table 4: Hyperparameter settings for the different experiments. 'Varies' means the respective hyperparameter was varied in the experiment. [lists many specific hyperparameters and their values across the three experiment columns, e.g.] latent_dim: 3 / 16 / 16; gaussian_noise_level: 0.05 / 0.05 / 0.05; optimizer: RADAM / RADAM / RADAM; start_lr: 0.001 / 0.001 / 0.001; batch_size: 16 / 16 / 16; ...
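The quoted Table 4 values can be collected into a plain config mapping for clarity. The experiment names below are hypothetical labels for the three unnamed table columns, and the dict layout is illustrative, not the paper's code:

```python
# Hypothetical config mapping mirroring the quoted Table 4 values,
# one entry per experiment column; key names follow the quote.
configs = {
    "experiment_1": {"latent_dim": 3,  "gaussian_noise_level": 0.05,
                     "optimizer": "RADAM", "start_lr": 0.001, "batch_size": 16},
    "experiment_2": {"latent_dim": 16, "gaussian_noise_level": 0.05,
                     "optimizer": "RADAM", "start_lr": 0.001, "batch_size": 16},
    "experiment_3": {"latent_dim": 16, "gaussian_noise_level": 0.05,
                     "optimizer": "RADAM", "start_lr": 0.001, "batch_size": 16},
}

for name, cfg in configs.items():
    print(name, cfg["latent_dim"], cfg["optimizer"], cfg["start_lr"])
```

Only the latent dimension differs across the quoted columns; optimizer, learning rate, noise level, and batch size are shared.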