Identifying Physical Law of Hamiltonian Systems via Meta-Learning

Authors: Seungjun Lee, Haesang Yang, Woojae Seong

ICLR 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We show that a well meta-trained learner can identify the shared representation of the Hamiltonian by evaluating our method on several types of physical systems with various experimental settings.
Researcher Affiliation | Academia | Seungjun Lee, Haesang Yang, Woojae Seong; Seoul National University; {tl7qns7ch,coupon3,wseong}@snu.ac.kr
Pseudocode | No | The paper describes the algorithms and their steps but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | More results and video could be available at https://github.com/7tl7qns7ch/Identifying-Physical-Law.
Open Datasets | Yes | Datasets. During meta-training, we generate 10,000 tasks for the meta-train sets of all systems. Each meta-train set consists of a task-specific train set and test set given by 50 randomly sampled point states x and their time-derivatives ẋ in phase space with task-specific physical parameters. The distributions of the sampled states and physical parameters for each physical process are described in Appendix A.1.
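To make the quoted task-generation recipe concrete, below is a minimal NumPy sketch for one illustrative system (an ideal spring-mass oscillator). The choice of system, the parameter and state sampling ranges, and the helper name `sample_spring_task` are assumptions for illustration; the actual distributions are given in the paper's Appendix A.1.

```python
import numpy as np

def sample_spring_task(n_points=50, rng=np.random.default_rng()):
    """Generate one meta-train task for an ideal spring-mass system.

    The mass m and spring constant k are the task-specific physical
    parameters; states x = (q, p) are sampled in phase space and their
    time-derivatives follow Hamilton's equations:
        dq/dt =  dH/dp =  p / m
        dp/dt = -dH/dq = -k * q
    Sampling ranges below are illustrative assumptions, not the
    distributions reported in Appendix A.1.
    """
    m = rng.uniform(0.5, 2.0)   # assumed task-parameter range
    k = rng.uniform(0.5, 2.0)   # assumed task-parameter range
    q = rng.uniform(-1.0, 1.0, size=n_points)
    p = rng.uniform(-1.0, 1.0, size=n_points)
    x = np.stack([q, p], axis=1)                  # point states
    dxdt = np.stack([p / m, -k * q], axis=1)      # time-derivatives
    return x, dxdt, {"m": m, "k": k}

# 10,000 tasks for the meta-train set, as described above.
meta_train_tasks = [sample_spring_task() for _ in range(10_000)]
```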
Dataset Splits | Yes | The observations of the system T_i can be split into D_i = {D_i^tr, D_i^te}, where D_i^tr and D_i^te denote the task-specific train and test sets, respectively.
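A minimal sketch of that per-task split follows; the shuffling and the train/test sizes are assumptions, since the quoted passage only defines the notation.

```python
import numpy as np

def split_task(x, dxdt, n_train=50, rng=np.random.default_rng(0)):
    """Shuffle one task's observations and split them into the
    task-specific train set D_i^tr and test set D_i^te."""
    idx = rng.permutation(len(x))
    tr, te = idx[:n_train], idx[n_train:]
    return (x[tr], dxdt[tr]), (x[te], dxdt[te])
```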
Hardware Specification | No | The paper does not specify any particular hardware (e.g., CPU, GPU models) used for running the experiments.
Software Dependencies | No | The paper mentions using the Adam optimizer but does not specify version numbers for any software libraries, frameworks, or programming languages used in the implementation.
Experiment Setup | Yes | For all tasks, we took the baseline model as a fully connected neural network of size (state dimensions) → 64 → Softplus → 64 → Softplus → 64 → Softplus → (state dimensions), and the HNN model as a fully connected neural network of size (state dimensions) → 64 → Softplus → 64 → Softplus → 64 → Softplus → 1. We searched the hyperparameters, exploring the size of hidden dimensions from {32, 64, 128, 256}, the number of layers from {1, 2, 3, 4}, and the activation function from {Sigmoid, Tanh, ReLU, Softplus}. During meta-training or pretraining, we use the Adam optimizer (Kingma & Ba, 2015) on the outer loop with a learning rate of 0.001 and gradient descent on the inner loop with a learning rate of 0.002. For all systems, we set the number of task batches to 10, inner gradient updates to 5, and outer-loop episodes to 100 for meta-optimization. During meta-testing, we also use the Adam optimizer with a learning rate of 0.002. No weight decay is used in any case.
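Since the quoted setup specifies architectures and optimizer settings but not a framework (see the Software Dependencies row), the following is a minimal PyTorch (>= 2.0) sketch of that setup. The layer sizes, Softplus activations, learning rates, task-batch size, inner-update count, and episode count follow the quote; the state dimension of 2, the symplectic form of the HNN time-derivative, the toy spring task sampler, and the second-order MAML-style inner/outer loop are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
from torch.func import functional_call  # assumes PyTorch >= 2.0

STATE_DIM = 2  # e.g. (q, p) for a one-dimensional system (assumption)

def mlp(in_dim, out_dim, hidden=64):
    # (state dim) -> 64 -> Softplus -> 64 -> Softplus -> 64 -> Softplus -> out_dim
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.Softplus(),
        nn.Linear(hidden, hidden), nn.Softplus(),
        nn.Linear(hidden, hidden), nn.Softplus(),
        nn.Linear(hidden, out_dim),
    )

baseline = mlp(STATE_DIM, STATE_DIM)  # directly regresses dx/dt
hnn_core = mlp(STATE_DIM, 1)          # regresses the scalar Hamiltonian H(x)

def hnn_derivative(params, x):
    """dx/dt = (dH/dp, -dH/dq): the symplectic gradient of the learned H."""
    x = x.requires_grad_(True)
    H = functional_call(hnn_core, params, (x,)).sum()
    dH = torch.autograd.grad(H, x, create_graph=True)[0]
    dHdq, dHdp = dH.split(1, dim=1)
    return torch.cat([dHdp, -dHdq], dim=1)

def sample_task_batch(n_tasks, n_pts=50):
    """Toy spring-mass task sampler standing in for the paper's task
    distributions; parameter and state ranges are assumptions."""
    batch = []
    for _ in range(n_tasks):
        m, k = (torch.rand(2) + 0.5).tolist()
        x = torch.rand(2 * n_pts, STATE_DIM) * 2 - 1            # states (q, p)
        y = torch.stack([x[:, 1] / m, -k * x[:, 0]], dim=1)     # Hamilton's equations
        batch.append((x[:n_pts], y[:n_pts], x[n_pts:], y[n_pts:]))
    return batch

# Adam (lr 1e-3) on the outer loop, plain gradient descent (lr 2e-3) on the
# inner loop, 10 tasks per batch, 5 inner updates, 100 outer episodes,
# and no weight decay, per the quoted setup.
meta_opt = torch.optim.Adam(hnn_core.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for episode in range(100):
    meta_opt.zero_grad()
    for x_tr, y_tr, x_te, y_te in sample_task_batch(10):
        params = dict(hnn_core.named_parameters())
        for _ in range(5):  # inner-loop adaptation on the task-specific train set
            loss = loss_fn(hnn_derivative(params, x_tr), y_tr)
            grads = torch.autograd.grad(loss, list(params.values()), create_graph=True)
            params = {k: v - 2e-3 * g for (k, v), g in zip(params.items(), grads)}
        # outer-loop loss on the task-specific test set, averaged over the batch
        (loss_fn(hnn_derivative(params, x_te), y_te) / 10).backward()
    meta_opt.step()
```

Meta-testing would reuse the same adaptation idea, but with Adam at a learning rate of 0.002 on the task-specific train set, as stated in the quote.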