Learning Chaotic Dynamics in Dissipative Systems

Authors: Zongyi Li, Miguel Liu-Schiaffini, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, Anima Anandkumar

NeurIPS 2022

Reproducibility assessment (for each variable: the result, followed by the LLM's supporting response):
Research Type: Experimental. "We evaluate our approach on the finite-dimensional, chaotic Lorenz-63 system as well as the chaotic 1D Kuramoto-Sivashinsky and 2D Navier-Stokes equations. In all cases we show that encouraging dissipativity is crucial for capturing the global attractor and evaluating statistics of the invariant measure."
Researcher Affiliation: Collaboration. Zongyi Li (Caltech, zongyili@caltech.edu), Miguel Liu-Schiaffini (Caltech, mliuschi@caltech.edu), Nikola Kovachki (NVIDIA, nkovachki@nvidia.com), Burigede Liu (University of Cambridge, bl377@eng.cam.ac.uk), Kamyar Azizzadenesheli (NVIDIA, kamyara@nvidia.com), Kaushik Bhattacharya (Caltech, bhatta@caltech.edu), Andrew Stuart (Caltech, astuart@caltech.edu), Anima Anandkumar (Caltech, anima@caltech.edu).
Pseudocode: No. No pseudocode or clearly labeled algorithm block is present in the paper.
Open Source Code: Yes. The code is available at https://github.com/neural-operator/markov_neural_operator.
Open Datasets: No. The paper uses data generated from well-known chaotic systems (Lorenz-63, Kuramoto-Sivashinsky, Navier-Stokes/Kolmogorov flow) and describes their parameters, but it does not provide a link, DOI, or formal citation for the specific trajectories used for training, validation, or testing.
Dataset Splits: No. The paper does not give explicit training, validation, or test split percentages or sample counts; it mentions training on a single trajectory for Lorenz-63 but provides no specific splits.
Hardware Specification: No. The paper does not describe the hardware used to run the experiments, such as GPU models, CPU types, or memory.
Software Dependencies: No. The paper does not give version numbers for key software components or libraries (e.g., Python, PyTorch, or the numerical solvers) needed to reproduce the experiments.
Experiment Setup: Yes. "We use the canonical parameters (σ, b, r) = (10, 8/3, 28) [51]. Since the solution operator of the Lorenz-63 system is finite-dimensional, we learn it by training a feedforward neural network on a single trajectory, sampled at time steps of h = 0.05 s, on the Lorenz attractor. We encourage dissipativity during training with the criterion described in eq. (7), with λ = 0.5 and the sampling distribution taken to be uniform on a shell around the origin."
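The quoted setup can be sketched in Python: integrate Lorenz-63 with the canonical parameters (σ, b, r) = (10, 8/3, 28), subsample the trajectory at h = 0.05 to form one-step training pairs, and draw points uniformly from a shell around the origin for the dissipativity term. The RK4 sub-stepping, burn-in length, and shell radii below are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# Canonical Lorenz-63 parameters from the paper: (sigma, b, r) = (10, 8/3, 28).
SIGMA, B, R = 10.0, 8.0 / 3.0, 28.0


def lorenz_rhs(state):
    """Right-hand side of the Lorenz-63 ODE."""
    x, y, z = state
    return np.array([SIGMA * (y - x), x * (R - z) - y, x * y - B * z])


def rk4_step(state, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = lorenz_rhs(state)
    k2 = lorenz_rhs(state + 0.5 * dt * k1)
    k3 = lorenz_rhs(state + 0.5 * dt * k2)
    k4 = lorenz_rhs(state + dt * k3)
    return state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)


def generate_trajectory(n_steps, h=0.05, substeps=10, x0=(1.0, 1.0, 1.0), burn_in=1000):
    """Sample a single trajectory at spacing h on the Lorenz attractor.

    Each h-interval is integrated with `substeps` RK4 sub-steps for accuracy,
    and `burn_in` initial samples are discarded so that the remaining states
    lie on the attractor. Consecutive rows (u_t, u_{t+h}) form training pairs
    for a one-step (Markov) surrogate model.
    """
    dt = h / substeps
    state = np.asarray(x0, dtype=float)
    samples = []
    for i in range(burn_in + n_steps):
        for _ in range(substeps):
            state = rk4_step(state, dt)
        if i >= burn_in:
            samples.append(state.copy())
    return np.stack(samples)  # shape (n_steps, 3)


def sample_shell(n, r_min=40.0, r_max=50.0, rng=None):
    """Draw n points from a shell around the origin (hypothetical radii).

    Directions are uniform on the sphere; radii are uniform in [r_min, r_max].
    Such points feed the dissipativity criterion (eq. (7) in the paper), which
    penalizes the model for mapping them away from the absorbing set.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    directions = rng.normal(size=(n, 3))
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    radii = rng.uniform(r_min, r_max, size=(n, 1))
    return radii * directions
```

A training loop would then minimize the one-step prediction loss on consecutive trajectory samples plus λ = 0.5 times a penalty evaluated at `sample_shell` points, per the paper's eq. (7).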