Out-of-Domain Generalization in Dynamical Systems Reconstruction

Authors: Niclas Alexander Göring, Florian Hess, Manuel Brenner, Zahra Monfared, Daniel Durstewitz

ICML 2024

Reproducibility

| Variable | Result | LLM Response |
|----------|--------|--------------|
| Research Type | Experimental | "We also show this empirically, considering major classes of DSR algorithms proposed so far, and illustrate where and why they fail to generalize across the whole state space." |
| Researcher Affiliation | Academia | (1) Department of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany; (2) Faculty of Physics and Astronomy, Heidelberg University, Heidelberg, Germany; (3) Interdisciplinary Center for Scientific Computing, Heidelberg University, Heidelberg, Germany |
| Pseudocode | No | The paper does not contain any clearly labeled "Pseudocode" or "Algorithm" blocks, nor does it present structured steps formatted like code. |
| Open Source Code | Yes | "All code used here is available at https://github.com/DurstewitzLab/OODG-in-DSR." |
| Open Datasets | No | The paper describes generating synthetic datasets from Duffing and Lorenz-like systems, but does not provide direct links, DOIs, or citations to publicly available, pre-generated datasets for these experiments. It only provides code to generate them. |
| Dataset Splits | No | The paper describes training parameters and evaluation grids for initial conditions, but does not explicitly provide percentages or counts for training, validation, and test dataset splits. |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware used for running its experiments, such as GPU/CPU models, processor types, or memory amounts. |
| Software Dependencies | No | The paper mentions software like the Julia library DifferentialEquations.jl, the Flux.jl DL stack, and PySINDy with citations, but does not provide specific version numbers for these or other ancillary software components used in the experiments. |
| Experiment Setup | Yes | "Detailed hyperparameter settings are collected in Table A2" ("Hyperparameter settings of shPLRNNs trained on the Duffing and Lorenz-like systems"). |
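As the Open Datasets row notes, the training data are produced by simulating the underlying systems rather than downloaded. A minimal sketch of what such data generation could look like for an unforced, damped Duffing-type system is below; the parameter value, damping form, time horizon, and initial condition are illustrative assumptions, not the paper's actual settings (which live in the linked repository).

```python
# Hedged sketch: synthesize a trajectory from a damped double-well
# Duffing-type system, the kind of data a DSR model would be trained on.
# All numeric choices here are assumptions for illustration only.
import numpy as np
from scipy.integrate import solve_ivp

def duffing(t, state, delta=0.3):
    # x' = y, y' = x - x^3 - delta*y  (double-well potential, linear damping)
    x, y = state
    return [y, x - x**3 - delta * y]

t_eval = np.linspace(0.0, 50.0, 2001)
sol = solve_ivp(duffing, (0.0, 50.0), [0.5, 0.0], t_eval=t_eval, rtol=1e-8)
trajectory = sol.y.T  # shape (2001, 2); one initial condition, one basin
```

Starting inside the right-hand well, the damped trajectory settles onto the stable fixed point at (1, 0); covering only one basin like this is exactly the situation in which out-of-domain generalization across the full state space becomes the open question the paper studies.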