Structure-Preserving Physics-Informed Neural Networks with Energy or Lyapunov Structure

Authors: Haoyu Chu, Yuto Miyatake, Wenjun Cui, Shikui Wei, Daisuke Furihata

IJCAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results demonstrate that the proposed method improves the numerical accuracy of PINNs for partial differential equations (PDEs). Furthermore, the robustness of the model against adversarial perturbations in image data is enhanced.
Researcher Affiliation | Academia | Haoyu Chu (1,2,3), Yuto Miyatake (4), Wenjun Cui (5), Shikui Wei (1,3), and Daisuke Furihata (4). Affiliations: (1) Institute of Information Science, Beijing Jiaotong University; (2) Graduate School of Information Science and Technology, Osaka University; (3) Beijing Key Laboratory of Advanced Information Science and Network Technology; (4) Cybermedia Center, Osaka University; (5) School of Computer and Information Technology, Beijing Jiaotong University. Emails: {19112001, 19112048, shkwei}@bjtu.edu.cn, {miyatake, furihata}@cas.cmc.osaka-u.ac.jp
Pseudocode | No | The paper does not contain explicit pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any links or explicit statements about releasing source code for the described methodology.
Open Datasets | Yes | We conduct a set of experiments on four datasets: MNIST [LeCun et al., 1998], Street View House Numbers (SVHN) [Yuval, 2011], and CIFAR10/100 [Krizhevsky et al., 2009].
Dataset Splits | No | The paper does not explicitly specify validation splits or a splitting methodology; it mentions training epochs but not how the data was divided for validation.
Hardware Specification | Yes | All the experiments are run on a single NVIDIA A100 40GB GPU.
Software Dependencies | Yes | We use the PyTorch [Paszke et al., 2017] framework for the implementation. The torch version is 1.11.0+cu113.
Experiment Setup | Yes | Regarding the training configurations, we first run the Adam algorithm [Kingma and Ba, 2014] for 10,000 epochs and then employ the L-BFGS algorithm [Liu and Nocedal, 1989]. In our experiments, we set all the hyperparameters λ_i to 1. For optimization, we use the Adam algorithm with an initial learning rate of 0.001 and a cosine annealing schedule. The training epochs for MNIST, SVHN, and CIFAR10/100 are set to 10, 40, and 60/70, respectively.
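
The two-stage optimization reported in the Experiment Setup row (Adam with a cosine-annealed learning rate of 0.001, followed by L-BFGS refinement) can be reproduced with standard PyTorch 1.11 components. The sketch below is a minimal illustration under that assumption only: the model, data, and MSE loss are hypothetical placeholders, not the authors' actual PINN implementation or residual loss.

import torch

# Hypothetical stand-ins: a small fully connected network, random inputs, and an
# MSE loss; only the optimizer settings below come from the paper's reported setup.
model = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))
inputs = torch.rand(128, 2)
targets = torch.zeros(128, 1)
loss_fn = torch.nn.MSELoss()

# Stage 1: Adam with initial learning rate 0.001 and a cosine annealing schedule.
adam = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(adam, T_max=10_000)
for epoch in range(10_000):
    adam.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    adam.step()
    scheduler.step()

# Stage 2: refine with L-BFGS, which re-evaluates the loss through a closure.
lbfgs = torch.optim.LBFGS(model.parameters(), max_iter=500)

def closure():
    lbfgs.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    return loss

lbfgs.step(closure)

The closure is needed because L-BFGS performs several function evaluations per optimizer step; the iteration count of 500 is an assumed value, as the paper excerpt above does not state one.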