Reconstructing Nonlinear Dynamical Systems from Multi-Modal Time Series

Authors: Daniel Kramer, Philine L. Bommer, Carlo Tombolini, Georgia Koppe, Daniel Durstewitz

ICML 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "We show on nonlinear DS benchmarks that our algorithms can efficiently compensate for too noisy or missing information in one data channel by exploiting other channels, and demonstrate on experimental neuroscience data how the algorithm learns to link different data domains to the underlying dynamics." |
| Researcher Affiliation | Academia | (1) Dept. of Theoretical Neuroscience, Central Institute of Mental Health, Heidelberg University, Germany; (2) Dept. of Machine Learning, Technical University Berlin, Berlin, Germany; (3) Clinic for Psychiatry and Psychotherapy, Central Institute of Mental Health, Mannheim, Germany; (4) Faculty of Physics and Astronomy, Heidelberg University, Germany. |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | "All code produced here is available at github.com/DurstewitzLab/mmPLRNN." |
| Open Datasets | Yes | "We chose a data set consisting of fMRI recordings ... The details of the experimental setup are not overly important here and are given in Koppe et al. (2014) and briefly summarized in Appx. B.3." |
| Dataset Splits | Yes | "Specifically, we ran a cross-validation protocol where each 20% segment of the time series was left out in turn for training, and unseen class labels were predicted on these left-out test sets (see Appx. B.3 for details)." A sketch of this segment-wise protocol follows the table. |
| Hardware Specification | Yes | "All experiments were run on CPU-based servers (Intel Xeon Platinum 8160 @ 2.1GHz with 24 cores or Intel Xeon Gold 6148 @ 2.4GHz with 20 cores)." |
| Software Dependencies | No | The paper mentions software such as Python/PyTorch, MatLab (MathWorks Inc.), and the scikit-learn library, but does not provide specific version numbers for these dependencies. |
| Experiment Setup | Yes | "Detailed information on hyper-parameter settings for all methods and experiments is collected in Appx. A.4. ... M = 15 was selected as it led to the best state space reconstructions ... a grid search was performed to determine the optimal Newton-Raphson learning rates in Eqn. 17 and 18 across the ranges α_z = {0.6, 0.8, 1} (E-step) and α_β = {0.0005, 0.001, 0.005, 0.01} (M-step), respectively." A grid-search sketch is given below the table. |
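
The cross-validation protocol quoted in the Dataset Splits row (each contiguous 20% segment of the time series held out in turn) can be made concrete with a short sketch. This is a minimal illustration in Python/NumPy, not the authors' code: the function name `segment_cv_splits` and the exact boundary handling are assumptions; the paper's own details are in Appx. B.3.

```python
import numpy as np

def segment_cv_splits(T, n_segments=5):
    """Yield (train_idx, test_idx) for a length-T time series, holding
    out each contiguous ~20% segment in turn (5 segments by default).

    Minimal sketch of the protocol described in the paper; the exact
    placement of segment boundaries is an assumption (see Appx. B.3).
    """
    bounds = np.linspace(0, T, n_segments + 1, dtype=int)
    for k in range(n_segments):
        test_idx = np.arange(bounds[k], bounds[k + 1])
        train_idx = np.concatenate(
            (np.arange(0, bounds[k]), np.arange(bounds[k + 1], T))
        )
        yield train_idx, test_idx

# Example: five folds over a time series with 360 time points.
for fold, (train_idx, test_idx) in enumerate(segment_cv_splits(360)):
    print(f"fold {fold}: train={len(train_idx)}, test={len(test_idx)}")
```

Holding out contiguous blocks, rather than randomly shuffled time points, is the standard way to avoid temporal leakage between training and test data in time-series cross-validation.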
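The grid search over Newton-Raphson learning rates quoted in the Experiment Setup row can likewise be sketched. Only the grid values below are taken from the paper (Appx. A.4); `fit_and_score` is a hypothetical stand-in for training the mmPLRNN with the given E-step/M-step rates and returning a validation score, and its dummy body must be replaced with the actual training routine.

```python
from itertools import product

# Grid values from the paper (Appx. A.4): Newton-Raphson learning
# rates for the E-step (Eqn. 17) and the M-step (Eqn. 18).
ALPHA_Z = (0.6, 0.8, 1.0)                  # E-step rates, alpha_z
ALPHA_BETA = (0.0005, 0.001, 0.005, 0.01)  # M-step rates, alpha_beta

def fit_and_score(alpha_z: float, alpha_beta: float) -> float:
    """Hypothetical placeholder: train the model with the given
    learning rates and return a validation score (higher is better).
    The dummy value below only makes the sketch runnable."""
    return -((alpha_z - 0.8) ** 2 + (alpha_beta - 0.001) ** 2)

# Exhaustive search over all 3 x 4 = 12 configurations.
best_score, best_cfg = float("-inf"), None
for alpha_z, alpha_beta in product(ALPHA_Z, ALPHA_BETA):
    score = fit_and_score(alpha_z, alpha_beta)
    if score > best_score:
        best_score, best_cfg = score, (alpha_z, alpha_beta)

print("best configuration (alpha_z, alpha_beta):", best_cfg)
```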