When are dynamical systems learned from time series data statistically accurate?
Authors: Jeongjin Park, Nicole Yang, Nisha Chandramoorthy
NeurIPS 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We verify our results on a number of ergodic chaotic systems and neural network parameterizations, including MLPs, ResNets, Fourier neural layers, and RNNs. |
| Researcher Affiliation | Academia | Jeongjin (Jayjay) Park, School of Computational Science and Engineering, Georgia Institute of Technology, Atlanta, GA 30332, jpark3141@gatech.edu; Nicole Tianjiao Yang, Department of Mathematics, Emory University, Atlanta, GA 30322, tianjiao.yang@emory.edu; Nisha Chandramoorthy, Department of Statistics, The University of Chicago, Chicago, IL 60637, nishac@uchicago.edu |
| Pseudocode | No | The paper describes algorithms and methods but does not include explicit pseudocode blocks or algorithm listings. |
| Open Source Code | Yes | The Python code is available at https://github.com/ni-sha-c/stacNODE. |
| Open Datasets | Yes | The first 10,000 data points are used as the training data and the last 8,000 points are the test data. (A minimal split sketch appears after the table.) |
| Dataset Splits | No | The paper mentions training data and test data, but does not explicitly describe a separate validation set or how one was used for hyperparameter tuning, if any. |
| Hardware Specification | Yes | Numerical experiments were conducted using Tesla A100 GPUs with 80GB and 40GB memory capacities. |
| Software Dependencies | No | The paper mentions the torchdiffeq library and the PyTorch library for AdamW optimization, but does not specify version numbers for these software components. |
| Experiment Setup | Yes | We use the Runge-Kutta 4-stage time integrator with a time step size of 0.01 to define the map F. ... Our Neural ODE map, F_nn, is learned to approximate F by solving the above optimization with n = 10,000 training points along an orbit. ... Table 2: Hyperparameter choices (Chaotic Systems, Epochs, Time step, Hidden layer width, Layers, Train/Test size, Neural Network, λ in (3)). (See the training sketch after the table.) |
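
A minimal sketch of the train/test split quoted in the "Open Datasets" row, assuming the orbit is stored as a single NumPy array; the file name and variable names are illustrative, not taken from the paper's repository:

```python
import numpy as np

# Hypothetical: `trajectory` holds one long orbit of the chaotic system,
# sampled at the paper's reported time step of 0.01.
trajectory = np.load("orbit.npy")      # shape (N, d), N >= 18,000 assumed

train = trajectory[:10_000]            # first 10,000 points used for training
test = trajectory[-8_000:]             # last 8,000 points held out for testing
```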
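
A minimal sketch of the setup quoted in the "Experiment Setup" row, assuming the Lorenz-63 vector field as a stand-in for the paper's chaotic systems: an RK4 step with dt = 0.01 defines the reference map F, an orbit of n = 10,000 points serves as training data, and a small MLP trained with torchdiffeq's `odeint` and PyTorch's `AdamW` stands in for the Neural ODE map F_nn. All names, the network width, epoch count, and learning rate are illustrative; only the time step and training size come from the quoted text.

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint  # dependency quoted in the paper (version unspecified)


def lorenz(t, x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Stand-in chaotic vector field (Lorenz-63)."""
    dx = sigma * (x[..., 1] - x[..., 0])
    dy = x[..., 0] * (rho - x[..., 2]) - x[..., 1]
    dz = x[..., 0] * x[..., 1] - beta * x[..., 2]
    return torch.stack([dx, dy, dz], dim=-1)


def rk4_step(f, x, dt=0.01):
    """One Runge-Kutta 4-stage step with dt = 0.01; iterating it defines the map F."""
    k1 = f(0.0, x)
    k2 = f(0.0, x + 0.5 * dt * k1)
    k3 = f(0.0, x + 0.5 * dt * k2)
    k4 = f(0.0, x + dt * k3)
    return x + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)


# Generate an orbit of n = 10,000 training points (burn-in omitted for brevity).
x = torch.tensor([1.0, 1.0, 1.0])
orbit = [x]
for _ in range(9_999):
    x = rk4_step(lorenz, x)
    orbit.append(x)
orbit = torch.stack(orbit)                           # shape (10000, 3)

# Hypothetical Neural ODE vector field whose one-step flow approximates F.
field = nn.Sequential(nn.Linear(3, 256), nn.Tanh(), nn.Linear(256, 3))
opt = torch.optim.AdamW(field.parameters(), lr=1e-3)
t = torch.tensor([0.0, 0.01])                        # integrate over one time step

for epoch in range(100):                             # epoch count is illustrative
    pred = odeint(lambda s, y: field(y), orbit[:-1], t)[-1]
    loss = ((pred - orbit[1:]) ** 2).mean()          # one-step prediction loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```

This sketch uses a plain one-step mean-squared error; the paper's objective additionally involves a regularization weight λ (its Eq. (3)), which is not reproduced here.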