Sparse Symplectically Integrated Neural Networks

Authors: Daniel DiPietro, Shiying Xiong, Bo Zhu

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We evaluate SSINNs on four classical Hamiltonian dynamical problems: the Hénon-Heiles system, nonlinearly coupled oscillators, a multi-particle mass-spring system, and a pendulum system. Our results demonstrate promise in both system prediction and conservation of energy, often outperforming the current state-of-the-art black-box prediction techniques by an order of magnitude."
Researcher Affiliation | Academia | "Daniel M. DiPietro, Shiying Xiong, Bo Zhu. Dartmouth College, Department of Computer Science. {daniel.m.dipietro.22, shiying.xiong, bo.zhu}@dartmouth.edu"
Pseudocode | No | The paper does not contain a clearly labeled pseudocode or algorithm block. Figure 1 illustrates the computational flow, but it is a diagram rather than pseudocode.
Open Source Code | Yes | "Our code is available at https://github.com/dandip/ssinn"
Open Datasets | No | The paper describes generating custom datasets for its experiments (e.g., 'The Hénon-Heiles dataset consists of 5,000 points... We computed this dataset via Clean Numerical Simulation', 'we first generated a clean dataset of 500 randomly sampled state transitions'). It does not provide access information (links, DOIs, or formal citations to publicly available versions) for these datasets.
Dataset Splits | Yes | "All SSINNs converged to the governing Hamiltonian on the clean dataset, with the best-performing model achieving prediction error of 2×10^-8 for computing t = 0 to t = 0.1 on a validation dataset of 100 points."
Hardware Specification | Yes | "trained using ADAM for 5 epochs on an RTX 2080 Ti system [18]."
Software Dependencies | No | The paper mentions using 'ADAM' as an optimizer but does not specify versions for any software components, libraries, or programming languages (e.g., Python, PyTorch, TensorFlow, CUDA).
Experiment Setup | Yes | "Each model had an initial learning rate of 10^-3 with decay and was trained using ADAM for 5 epochs... A regularization coefficient of 10^-3 led to the best results." and "we increased the initial learning rate to 10^-2 and trained for 60 epochs... Additionally, the regularization coefficient was tuned to 8×10^-3." and "altered the regularization parameter to 4×10^-4. Similarly, we increased the SRNN models to 1024 hidden nodes per layer. All models were trained for 30 epochs with an initial learning rate of 10^-2."
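
The Open Datasets row above notes that the transition datasets were generated rather than downloaded. As a rough illustration of what that entails, here is a minimal sketch of producing (q0, p0) -> (q1, p1) state-transition pairs for the Hénon-Heiles system. The paper computes its dataset via Clean Numerical Simulation; this sketch substitutes a plain leapfrog integrator, and the sampling range, step size, and all names are our assumptions, not the authors' code.

```python
# Hedged sketch: generate state transitions for the Henon-Heiles system,
# H = (px^2 + py^2)/2 + (x^2 + y^2)/2 + x^2*y - y^3/3.
import numpy as np

def grad_V(q):
    """Gradient of the Henon-Heiles potential V(x, y)."""
    x, y = q
    return np.array([x + 2.0 * x * y, y + x**2 - y**2])

def leapfrog(q, p, t_span=0.1, dt=1e-4):
    """Kick-drift-kick integration of a separable H = |p|^2/2 + V(q)."""
    for _ in range(int(t_span / dt)):
        p = p - 0.5 * dt * grad_V(q)   # half kick
        q = q + dt * p                 # drift (unit mass, so dq/dt = p)
        p = p - 0.5 * dt * grad_V(q)   # half kick
    return q, p

rng = np.random.default_rng(0)
transitions = []
for _ in range(5000):                  # dataset size quoted in the paper
    q0 = rng.uniform(-0.5, 0.5, size=2)   # sampling range is our assumption
    p0 = rng.uniform(-0.5, 0.5, size=2)
    q1, p1 = leapfrog(q0, p0)
    transitions.append((q0, p0, q1, p1))
```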
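
Similarly, the Experiment Setup row pins down concrete hyperparameters: Adam, an initial learning rate of 10^-3 with decay, a regularization coefficient of 10^-3, and 5 epochs of training. The sketch below wires those quoted values into a minimal PyTorch training loop. The model, loss, decay schedule, and data here are placeholders (the actual SSINN forward pass symplectically integrates a learned sparse Hamiltonian), so treat this as an illustration of the configuration rather than the authors' implementation.

```python
# Hedged sketch of the quoted training configuration; only the hyperparameter
# values (lr=1e-3, reg_coeff=1e-3, 5 epochs, Adam) come from the paper.
import torch

# Synthetic stand-in for the (q0, p0) -> (q1, p1) transition dataset.
inputs, targets = torch.randn(500, 4), torch.randn(500, 4)
loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(inputs, targets), batch_size=50)

model = torch.nn.Linear(4, 4)        # placeholder; not the SSINN architecture
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# "with decay": the schedule type is unspecified, so exponential decay is assumed.
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.9)
reg_coeff = 1e-3                     # regularization coefficient from the paper

for epoch in range(5):               # "trained using ADAM for 5 epochs"
    for x, y in loader:
        loss = torch.nn.functional.mse_loss(model(x), y)
        # L1 penalty on the learnable coefficients encourages sparsity.
        loss = loss + reg_coeff * sum(p.abs().sum() for p in model.parameters())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    scheduler.step()
```

The L1 penalty is the piece that makes the regression sparse: coefficients on unneeded candidate terms are driven toward zero, leaving a compact symbolic form of the Hamiltonian.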