Lie Point Symmetry and Physics-Informed Networks
Authors: Tara Akhound-Sadegh, Laurence Perreault-Levasseur, Johannes Brandstetter, Max Welling, Siamak Ravanbakhsh
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical evaluations indicate that the inductive bias introduced by the Lie point symmetries of the PDEs greatly boosts the sample efficiency of PINNs. (Section 4: Experiments) |
| Researcher Affiliation | Collaboration | Tara Akhound-Sadegh: School of Computer Science, McGill University; Mila Quebec Artificial Intelligence Institute, Montreal, Quebec, Canada. Laurence Perreault-Levasseur: Université de Montréal, Montreal, Quebec, Canada; Ciela Institute, Montreal, Quebec, Canada; Mila Quebec Artificial Intelligence Institute, Montreal, Quebec, Canada; Trottier Space Institute, Montreal, Quebec, Canada; CCA, Flatiron Institute, New York, USA; Perimeter Institute, Waterloo, Ontario, Canada. Johannes Brandstetter: Microsoft Research AI4Science, Amsterdam, Netherlands. Max Welling: University of Amsterdam, Amsterdam, Netherlands. Siamak Ravanbakhsh: School of Computer Science, McGill University; Mila Quebec Artificial Intelligence Institute, Montreal, Quebec, Canada |
| Pseudocode | Yes | Algorithm 1 PINN with Lie Point Symmetry |
| Open Source Code | Yes | We will make the data and the code available on GitHub. |
| Open Datasets | No | The paper states, 'We generate simulated solutions, which we use to test the model's performance' and describes how initial conditions are generated using 'truncated Fourier series with coefficients A_k, l_k, ϕ_k sampled randomly.' It does not use a pre-existing publicly available dataset with a direct link, DOI, or formal citation. |
| Dataset Splits | Yes | We also randomly sample 100 and 300 of these initial conditions and use them for the validation and test datasets respectively. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models, or detailed computer specifications used for running its experiments. |
| Software Dependencies | No | The paper mentions software components like 'MLPs', 'elu activation', and 'ADAM optimizer' but does not provide specific version numbers for any software dependencies. |
| Experiment Setup | Yes | We model the two networks, g_θ1 and e_θ2 in Eq. (9), with MLPs consisting of 7 hidden layers of width 100. ... We used the ADAM optimizer with a learning rate of 0.001 for the training and performed early stopping using the validation dataset. ... The specific coefficient values for the models trained with and without symmetry loss for the heat equation are in Table 3 and Table 4. |
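The architecture quoted in the Experiment Setup row can be sketched concretely. The following is a minimal NumPy illustration of one such network: an MLP with 7 hidden layers of width 100 and ELU activations, mapping a collocation point (t, x) to a scalar field value. The function names (`init_mlp`, `mlp_forward`) and the Xavier-style initialization are hypothetical choices for illustration; the paper does not specify them, and the actual training (Adam, learning rate 0.001, early stopping, symmetry loss) is not reproduced here.

```python
import numpy as np

def elu(z, alpha=1.0):
    # ELU activation, as mentioned in the paper's setup
    return np.where(z > 0, z, alpha * (np.exp(z) - 1.0))

def init_mlp(sizes, rng):
    # Xavier-style initialization (illustrative choice, not from the paper)
    return [(rng.normal(0.0, np.sqrt(2.0 / (m + n)), size=(m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp_forward(params, x):
    # ELU on all hidden layers, linear output layer
    for W, b in params[:-1]:
        x = elu(x @ W + b)
    W, b = params[-1]
    return x @ W + b

rng = np.random.default_rng(0)
# 7 hidden layers of width 100; inputs are (t, x), output is scalar u
sizes = [2] + [100] * 7 + [1]
g_theta1 = init_mlp(sizes, rng)

pts = rng.uniform(size=(64, 2))   # a batch of collocation points (t, x)
u = mlp_forward(g_theta1, pts)    # u has shape (64, 1)
```

In the paper's formulation there are two such networks (g_θ1 and e_θ2 in Eq. (9)); the sketch above shows one, and the second would be built the same way.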