Symplectic Neural Gaussian Processes for Meta-learning Hamiltonian Dynamics

Authors: Tomoharu Iwata, Yusuke Tanaka

IJCAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In our experiments, we demonstrate that the proposed method outperforms existing methods for predicting dynamics from a small number of observations in target systems.
Researcher Affiliation | Industry | Tomoharu Iwata, Yusuke Tanaka, NTT Corporation, {tomoharu.iwata,ysk.tanaka}@ntt.com
Pseudocode | Yes | Algorithm 1: Meta-learning procedure of our SNGP model.
Open Source Code | No | The paper does not provide a direct link or explicit statement about the availability of its source code.
Open Datasets | No | The paper describes generating data from six types of dynamical systems (mass-spring, pendulum, and Duffing, each with and without friction) with randomly determined physical parameters and initial conditions, rather than using a pre-existing publicly available dataset with a specific link or citation.
Dataset Splits | Yes | For each type, five systems were used for meta-training, three for meta-validation, and six for meta-test.
Hardware Specification | No | The paper does not specify the hardware used for experiments (e.g., CPU, GPU models, memory).
Software Dependencies | No | The paper mentions 'PyTorch' and 'functorch' but does not specify their version numbers.
Experiment Setup | Yes | For obtaining the system representation in Eq. (3), we used a bidirectional LSTM [Graves, 2012] for the RNN with 32 hidden units, where the sequence of states was used as input. For NN_z and NN_k, we used three-layered feed-forward neural networks with 32 hidden and output units. For NN_m, we used four-layered feed-forward neural networks with 32 hidden units. For the activation function, we used the hyperbolic tangent. We optimized our models using Adam [Kingma and Ba, 2015] with a learning rate of 10^-3 and a batch size of four datasets. The meta-validation datasets were used for early stopping, with a maximum of 5,000 meta-training epochs.
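To make the Experiment Setup row concrete, the following is a minimal PyTorch sketch of the described components: a bidirectional LSTM encoder with 32 hidden units, three-layered networks NN_z and NN_k and a four-layered network NN_m with 32 hidden units and tanh activations, optimized with Adam at learning rate 10^-3. The state dimension, representation size, and each network's input/output dimensions are assumptions for illustration; the paper only fixes the widths and depths.

import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=32, layers=3):
    """Feed-forward net with tanh activations, widths as stated in the paper."""
    dims = [in_dim] + [hidden] * (layers - 1) + [out_dim]
    mods = []
    for i in range(len(dims) - 1):
        mods.append(nn.Linear(dims[i], dims[i + 1]))
        if i < len(dims) - 2:
            mods.append(nn.Tanh())
    return nn.Sequential(*mods)

state_dim = 2   # assumption: (q, p) for a one-dimensional system
rep_dim = 32    # assumption: size of the system representation

# Bidirectional LSTM over the observed state sequence (Eq. (3) in the paper),
# 32 hidden units per direction.
encoder = nn.LSTM(input_size=state_dim, hidden_size=32,
                  batch_first=True, bidirectional=True)

# Three-layered NN_z and NN_k, four-layered NN_m; the input dimensions
# below (encoder output, representation + state) are assumptions.
nn_z = mlp(2 * 32, rep_dim, layers=3)
nn_k = mlp(rep_dim + state_dim, 32, layers=3)
nn_m = mlp(rep_dim + state_dim, 1, layers=4)

params = (list(encoder.parameters()) + list(nn_z.parameters())
          + list(nn_k.parameters()) + list(nn_m.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-3)  # Adam, learning rate 10^-3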
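The Pseudocode row notes that Algorithm 1 gives the meta-learning procedure, but the paper's listing is not reproduced here. The sketch below is therefore a generic episodic meta-training loop consistent only with the stated setup (meta-batches of four datasets, early stopping on the meta-validation datasets, at most 5,000 epochs); it is not Algorithm 1 itself, and `loss_fn`, the dataset containers, and the patience value are hypothetical placeholders.

import copy
import random
import torch

def meta_train(model, meta_train_sets, meta_val_sets, loss_fn,
               max_epochs=5000, batch_size=4, patience=100):
    """Sketch of an episodic meta-training loop with early stopping.

    `model`, `loss_fn`, and the dataset lists are hypothetical; the paper's
    Algorithm 1 defines the actual per-episode computation. `patience` is an
    assumption; the paper only states the 5,000-epoch cap.
    """
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    best_val, best_state, bad_epochs = float("inf"), None, 0

    for epoch in range(max_epochs):
        # Sample a meta-batch of four datasets, as in the paper's setup.
        batch = random.sample(meta_train_sets, batch_size)
        optimizer.zero_grad()
        loss = sum(loss_fn(model, d) for d in batch) / batch_size
        loss.backward()
        optimizer.step()

        # Early stopping monitored on the meta-validation datasets.
        with torch.no_grad():
            val = sum(loss_fn(model, d) for d in meta_val_sets) / len(meta_val_sets)
        if val < best_val:
            best_val = val
            best_state = copy.deepcopy(model.state_dict())
            bad_epochs = 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                break

    if best_state is not None:
        model.load_state_dict(best_state)  # restore the best validation model
    return model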