Hyperverlet: A Symplectic Hypersolver for Hamiltonian Systems

Authors: Frederik Baymler Mathiesen, Bin Yang, Jilin Hu (pp. 4575-4582)

AAAI 2022

Reproducibility checklist (each variable lists the assessed result, followed by the supporting LLM response):
Research Type: Experimental
  "Extensive experiments on a spring-mass and a pendulum system justify the design choices and suggest that Hyperverlet outperforms both traditional solvers and hypersolvers. Experiment setup: we test the performance on the two classical systems, undamped spring-mass and pendulum. Experiment results: to measure the accuracy, we employ the traditional mean squared error (MSE) of the canonical coordinates z, i.e. both position and momentum, where the average is computed over the temporal and spatial axes. We report the mean and standard deviation of the MSE over trials as µ ± σ in Table 3 for both the spring-mass and pendulum systems. The number of trials is 100 for both systems. Ablation study: to study the impact of a symplectic corrector, the parameterization of the symplectic transformations, and the choice of activation function, we conduct an ablation study."
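The accuracy metric above (MSE of the canonical coordinates z, averaged over the temporal and spatial axes) can be sketched as follows; the array layout (time steps × coordinates) is an assumption, not taken from the paper.

```python
import numpy as np

def trajectory_mse(z_pred, z_true):
    """MSE over canonical coordinates z = (q, p), averaged over
    both the temporal axis and the spatial (coordinate) axis."""
    z_pred, z_true = np.asarray(z_pred), np.asarray(z_true)
    return float(np.mean((z_pred - z_true) ** 2))

# Toy check: identical trajectories give zero error.
z = np.zeros((10, 2))  # 10 time steps, (position, momentum)
assert trajectory_mse(z, z) == 0.0
```

The reported µ ± σ would then be the mean and standard deviation of this scalar across the 100 trials.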
Researcher Affiliation: Academia
  "(1) Delft University of Technology, the Netherlands; (2) Aalborg University, Denmark"
Pseudocode: No
  "The paper describes mathematical equations and transformations for the Hyperverlet solver, but it does not include a distinct pseudocode block or algorithm."
Open Source Code: Yes
  "The method is implemented in PyTorch 1.9.0, and the code is available at https://github.com/Zinoex/hyperverlet."
Open Datasets: No
  "To synthesize the training and test data, we utilize a 4th-order Forest-Ruth symplectic solver (Forest and Ruth 1990) with small time steps to produce a high-precision dataset, which we coarsen by an integer factor to obtain the final dataset."
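A minimal sketch of the data-generation recipe described above: a standard 4th-order Forest-Ruth step for a separable Hamiltonian with unit mass, applied to the undamped spring-mass system (k = m = 1), with the fine trajectory then coarsened by an integer factor. Step size, trajectory length, and coarsening factor here are illustrative choices, not the paper's.

```python
import numpy as np

# Forest-Ruth composition coefficients (4th-order symplectic).
THETA = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))
C = [THETA / 2, (1 - THETA) / 2, (1 - THETA) / 2, THETA / 2]
D = [THETA, 1 - 2 * THETA, THETA, 0.0]

def forest_ruth_step(q, p, h, force):
    """One 4th-order Forest-Ruth step: dq/dt = p, dp/dt = force(q)."""
    for c, d in zip(C, D):
        q = q + c * h * p
        p = p + d * h * force(q)
    return q, p

def simulate(q0, p0, h, n, force):
    """Integrate n fine steps, returning the (n+1, 2) trajectory of (q, p)."""
    traj, (q, p) = [(q0, p0)], (q0, p0)
    for _ in range(n):
        q, p = forest_ruth_step(q, p, h, force)
        traj.append((q, p))
    return np.array(traj)

# Undamped spring-mass (k = m = 1): force(q) = -q.
fine = simulate(1.0, 0.0, h=0.01, n=100, force=lambda q: -q)
coarse = fine[::10]  # coarsen by the integer factor c = 10
```

Because the system is symplectic and the integrator is 4th order, the fine trajectory closely tracks the analytic solution q(t) = cos(t) while conserving energy, which is what makes it a suitable high-precision reference dataset.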
Dataset Splits: No
  "The paper mentions 'training and test data' and a 'training procedure' but does not specify a separate validation split or how it is used."
Hardware Specification: Yes
  "Experiments are conducted on a Linux Manjaro desktop with an Intel i7-6700K processor and 16 GB RAM. No GPU is used for the reported experiments."
Software Dependencies: Yes
  "The method is implemented in PyTorch 1.9.0."
Experiment Setup: Yes
  "For the neural network of HyperEuler, we use a fully connected neural network with 4 hidden layers of 16 neurons, sigmoid activation, and no activation on the output layer. The weights are initialized using Kaiming normal initialization, and the bias is initialized to all zeros. The SympNet architecture consists of 4 alternating linear layers starting with an upper module, followed by one lower activation layer with tanh as the activation function. The structure is repeated with 4 linear layers and an upper activation layer. This 10-layer structure is repeated twice. We adopt the initialization procedure of SympNet (Jin et al. 2020), where the weights are initialized randomly from the distribution N(0, 0.01) and the bias is initialized to all zeros. For all trainable solvers, we employ a learning rate of 1e-3, an Adam optimizer with a weight decay of 1e-2, and the MSE of single-step predictions as the loss. Solvers are trained for 6c epochs, where c denotes the integer coarsening factor relative to the high-precision dataset. We train all solvers using a batch size of 100,000."
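The HyperEuler network and optimizer described above can be sketched in PyTorch as follows. The input and output dimensions are assumptions for illustration (e.g. position, momentum, and time step in; position and momentum residuals out); only the layer widths, activation, initialization, and optimizer settings come from the quoted setup.

```python
import torch
import torch.nn as nn

def make_hypereuler_mlp(in_dim=3, out_dim=2, hidden=16):
    """Fully connected net: 4 hidden layers of 16 sigmoid units,
    no activation on the output layer (per the quoted setup).
    in_dim/out_dim are hypothetical placeholders."""
    layers, d = [], in_dim
    for _ in range(4):
        layers += [nn.Linear(d, hidden), nn.Sigmoid()]
        d = hidden
    layers.append(nn.Linear(d, out_dim))
    net = nn.Sequential(*layers)
    # Kaiming-normal weights, zero biases, as described above.
    for m in net.modules():
        if isinstance(m, nn.Linear):
            nn.init.kaiming_normal_(m.weight)
            nn.init.zeros_(m.bias)
    return net

net = make_hypereuler_mlp()
# Adam with lr = 1e-3 and weight decay = 1e-2, per the quoted setup;
# the loss would be the MSE of single-step predictions.
opt = torch.optim.Adam(net.parameters(), lr=1e-3, weight_decay=1e-2)
```

The SympNet corrector is not sketched here, since its upper/lower symplectic modules follow the specific parameterization of Jin et al. (2020).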