Physics and Lie symmetry informed Gaussian processes
Authors: David Dalton, Dirk Husmeier, Hao Gao
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results demonstrate that the use of symmetry constraints improves the performance of the GP for both forward and inverse problems, and that our approach offers competitive performance with neural networks in the low-data environment. |
| Researcher Affiliation | Academia | 1School of Mathematics and Statistics, University of Glasgow, United Kingdom. Correspondence to: David Dalton <david.dalton@glasgow.ac.uk>. |
| Pseudocode | No | No pseudocode or clearly labeled algorithm blocks were found in the paper. |
| Open Source Code | Yes | Code and data are available at github.com/dodaltuin/jaxpigp/tree/main/examples/PSGPs. |
| Open Datasets | No | The paper describes generating datasets based on PDEs and resampling, but does not provide concrete access information (link, DOI, formal citation) to a publicly available or open dataset that was used for training. |
| Dataset Splits | No | The paper does not explicitly provide details about a validation dataset split (e.g., percentages, sample counts, or predefined splits). It mentions 'test set results' but no specific validation split. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory amounts, or cloud instance types) used for running experiments were mentioned in the paper. |
| Software Dependencies | No | The paper mentions 'Python using JAX' but does not specify version numbers for Python, JAX, or any other software libraries or dependencies. 'Adam optimiser' is mentioned, but it's an algorithm, not a versioned software component. |
| Experiment Setup | Yes | For the GP models, k_uu was specified to be the rational quadratic kernel. We experimented with different neural network architectures (using tanh activation function), and found that four hidden layers each of width 20 yielded the best accuracy. Each model was trained using the Adam optimiser with exponentially decaying learning rate (Kingma & Ba, 2017). As suggested in (Long et al., 2022), we use the whitening trick (Murray & Adams, 2010) when evaluating the ELBO (Eq. (21)) to improve training efficiency. Noise levels were set to 1% in each case; see Appendix B for results under different levels of noise. |
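
The quoted setup (a rational quadratic kernel k_uu, Adam with an exponentially decaying learning rate, implemented in Python using JAX) can be sketched as below. This is a minimal illustrative sketch under stated assumptions, not the authors' released code: the kernel hyperparameter values and the learning-rate schedule constants are assumptions chosen for illustration, and Optax is assumed as the optimiser library.

```python
# Minimal sketch (assumed, not the authors' code): rational quadratic kernel
# and Adam with an exponentially decaying learning rate, in JAX + Optax.
import jax.numpy as jnp
import optax


def rational_quadratic_kernel(x1, x2, variance=1.0, lengthscale=1.0, alpha=1.0):
    """k(x, x') = variance * (1 + ||x - x'||^2 / (2 * alpha * lengthscale^2))^(-alpha).

    x1: (N, D) array, x2: (M, D) array; returns an (N, M) kernel matrix.
    Hyperparameter defaults are illustrative assumptions.
    """
    sq_dist = jnp.sum((x1[:, None, :] - x2[None, :, :]) ** 2, axis=-1)
    return variance * (1.0 + sq_dist / (2.0 * alpha * lengthscale**2)) ** (-alpha)


# Adam optimiser with an exponentially decaying learning rate.
# The schedule constants below are placeholders, not values from the paper.
schedule = optax.exponential_decay(
    init_value=1e-2, transition_steps=1000, decay_rate=0.9
)
optimiser = optax.adam(learning_rate=schedule)
```

In practice one would pass `optimiser` to the usual Optax loop (`optimiser.init` / `optimiser.update`) while maximising the ELBO; the paper's own implementation is available at the repository linked in the Open Source Code row.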