Stiffness-aware neural network for learning Hamiltonian systems

Authors: Senwei Liang, Zhongzhan Huang, Hong Zhang

ICLR 2022

Reproducibility Variable | Result | LLM Response
Research Type: Experimental. The paper is experimental: "We evaluate SANN on complex physical systems including a three-body problem and billiard model. We show that SANN is more stable and can better preserve energy when compared with the state-of-the-art methods, leading to significant improvement in accuracy." Section 4 (Experiments) confirms: "To evaluate the performance of SANN, we use two complex Hamiltonian systems: the billiard model and the three-body problem."
Researcher Affiliation: Collaboration. Senwei Liang (Purdue University, liang339@purdue.edu); Zhongzhan Huang (Sun Yat-sen University, huangzhzh23@mail2.sysu.edu.cn); Hong Zhang (Argonne National Laboratory, hongzhang@anl.gov).
Pseudocode: No. The paper does not contain structured pseudocode or a clearly labeled algorithm block for the proposed method.
Open Source Code: No. The paper mentions "dynamic graphs on website" but does not explicitly state that source code for the described methodology is released, nor does it provide a link to a code repository.
Open Datasets: No. The paper describes how the training data was simulated ("We simulate the training set with 100 trajectories and the testing set with 30 trajectories by RKF45"; "The training data comprises 1,000 trajectories") but does not provide concrete access information (a link, DOI, or specific citation with author/year for a public dataset) for the datasets used.
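Since the data is simulated rather than released, a reader could regenerate comparable trajectories themselves. The sketch below is a minimal, hypothetical stand-in: it uses SciPy's adaptive "RK45" integrator (Dormand-Prince, a close cousin of the RKF45 method the paper cites) on a simple harmonic-oscillator Hamiltonian, since the paper's actual three-body and billiard vector fields, time spans, and initial-condition distributions are not specified here. All function names and parameter values below are assumptions, not the authors' setup.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Stand-in Hamiltonian: H(q, p) = p^2/2 + q^2/2 (simple harmonic oscillator).
# The paper's systems (three-body, billiards) would replace this vector field.
def hamiltonian_rhs(t, y):
    q, p = y
    return [p, -q]  # dq/dt = dH/dp,  dp/dt = -dH/dq

def simulate(n_traj, t_span=(0.0, 10.0), n_steps=100, seed=0):
    """Integrate n_traj trajectories from random initial conditions."""
    rng = np.random.default_rng(seed)
    t_eval = np.linspace(t_span[0], t_span[1], n_steps)
    trajs = []
    for _ in range(n_traj):
        y0 = rng.uniform(-1.0, 1.0, size=2)  # random initial (q, p)
        sol = solve_ivp(hamiltonian_rhs, t_span, y0, method="RK45",
                        t_eval=t_eval, rtol=1e-8, atol=1e-8)
        trajs.append(sol.y.T)
    return np.stack(trajs)  # shape (n_traj, n_steps, 2)

train_set = simulate(100)          # 100 training trajectories, as in the paper
test_set = simulate(30, seed=1)    # 30 testing trajectories
```

With a tight tolerance, the adaptive integrator conserves the oscillator's energy closely along each trajectory, which is the property the learned model is later evaluated on.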
Dataset Splits: No. The paper mentions a "validation dataset" and "validation loss" in the context of hyperparameter selection in Appendix G, but does not provide specific details on the validation split (e.g., percentages or sample counts).
Hardware Specification: No. The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies: No. The paper mentions software components such as "Adam" and a "Leapfrog solver" but does not provide version numbers for any software dependencies.
Experiment Setup: Yes. "We set the stiff ratio γ to be 10% and the partition S to be 10 for the stiff interval and 2 for the nonstiff interval. The loss function (7) is optimized by Adam with a batch size of 1,024, and we use an initial learning rate of 0.001 for 500 epochs. The learning rate follows cosine decay with the increasing training epoch."
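The optimizer portion of that setup (Adam, batch size 1,024, initial learning rate 0.001, 500 epochs, cosine decay) maps directly onto standard PyTorch components. The following is a minimal sketch under stated assumptions: the tiny MLP, the random tensors standing in for trajectory data, and the mean-squared-error placeholder for the paper's loss (7) are all hypothetical, since the SANN architecture and loss are not reproduced here.

```python
import torch

torch.manual_seed(0)

# Hypothetical stand-in network; the paper's SANN architecture is not shown here.
model = torch.nn.Sequential(torch.nn.Linear(4, 200), torch.nn.Tanh(),
                            torch.nn.Linear(200, 1))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # initial lr 0.001
epochs = 500
# Cosine decay of the learning rate over the training epochs, as stated.
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs)

# Random placeholder data; real training would use the simulated trajectories.
dataset = torch.utils.data.TensorDataset(torch.randn(2048, 4),
                                         torch.randn(2048, 1))
loader = torch.utils.data.DataLoader(dataset, batch_size=1024, shuffle=True)

for epoch in range(epochs):
    for x, y in loader:
        optimizer.zero_grad()
        loss = torch.mean((model(x) - y) ** 2)  # placeholder for loss (7)
        loss.backward()
        optimizer.step()
    scheduler.step()  # advance the cosine schedule once per epoch
```

Stepping the scheduler per epoch (not per batch) matches the paper's phrasing that the learning rate decays "with the increasing training epoch"; by the final epoch the learning rate has annealed to near zero.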