Adjoint-aided inference of Gaussian process driven differential equations
Authors: Paterne Gahungu, Christopher W. Lanyon, Mauricio A. Álvarez, Engineer Bainomugisha, Michael T. Smith, Richard D. Wilkinson
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Section 4 (Experiments) and Table 2: "The median MSE as a function of number of sensors and RFFs." |
| Researcher Affiliation | Academia | Paterne Gahungu (Quantum Leap Africa, African Institute for Mathematical Sciences, Rwanda, paterne.gahungu@aims.ac.rw); Christopher W. Lanyon (Department of Computer Science, University of Sheffield, UK, c.w.lanyon@sheffield.ac.uk); Mauricio A. Álvarez (Department of Computer Science, University of Manchester, UK); Engineer Bainomugisha (Department of Computer Science, Makerere University, Uganda); Michael T. Smith (Department of Computer Science, University of Sheffield, UK); Richard D. Wilkinson (School of Mathematical Sciences, University of Nottingham, UK) |
| Pseudocode | Yes | Algorithm 1 Computing the posterior distribution of q |
| Open Source Code | Yes | Software: The algorithm has been implemented (for both the ODE and PDE problems) in a Python module available at https://github.com/SheffieldML/advectionGP. |
| Open Datasets | No | To generate synthetic data, we simulate a realization f from the GP model, solve Eq. (15) for u(t), and then simulate n observations from Eq. (16) with T = 1, p2 = 0.5, p1 = 1, p0 = 5, Δt = T/n, t_i = iT/n, adding zero-mean Gaussian noise with standard deviation σ = 0.1. |
| Dataset Splits | No | To investigate the effects of varying feature and sensor numbers we performed a posterior predictive check using held-out data and used Monte Carlo estimation to calculate the posterior predictive mean squared error (MSE). (While "held-out data" is mentioned, no specific split percentages, counts, or standard splitting methodology is provided.) |
| Hardware Specification | Yes | We compared the run time of the adjoint method to a basic MH MCMC algorithm (recorded on a laptop with 16GB RAM and an Intel i7-1065G7 CPU @ 1.50 GHz). |
| Software Dependencies | No | The algorithm has been implemented (for both the ODE and PDE problems) in a Python module available at https://github.com/SheffieldML/advectionGP. and The only tool we use is GPyOpt, which we cite. (No version numbers provided for Python or GPyOpt.) |
| Experiment Setup | Yes | To generate synthetic data, we simulate a realization f from the GP model, solve Eq. (15) for u(t), and then simulate n observations from Eq. (16) with T = 1, p2 = 0.5, p1 = 1, p0 = 5, Δt = T/n, t_i = iT/n, adding zero-mean Gaussian noise with standard deviation σ = 0.1. For simplicity, we solve the ODE with a simple forward Euler approximation, but higher order schemes can and should be used in real applications. We approximate the GP using Eq. (8), using 200 RFFs generated using Eq. (14) with λ = 0.6 and τ² = 4. and Data was simulated on the spatial domain X = [0, 10]² for t ∈ [0, 10] by first randomly generating a forcing function f(x, t) (generated from a GP using an EQ kernel with λ = 2, τ² = 2), and then solving the forward problem (Eq. 17) to find u(x, t) using PDE parameters p1 = (0.4, 0.4) and p2 = 0.01. ... Zero-mean Gaussian distributed noise is added to the true sensor readings with standard deviation σ = 0.05 ... using just M = 10 RFFs. ... inference results (with M = 200) |
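The ODE setup quoted above can be sketched in a few lines of NumPy. This is a minimal sketch, not the paper's implementation: the forcing term is a random-Fourier-feature approximation of a draw from a GP with an EQ kernel (M = 200 features, λ = 0.6, τ² = 4, matching the quoted values), and the ODE is assumed to be a second-order linear equation p2·u″ + p1·u′ + p0·u = f(t), since Eq. (15) itself is not reproduced in this report. All variable names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# RFF approximation of a sample from a GP with an EQ kernel.
# For k(r) = tau2 * exp(-r^2 / (2 * lam^2)) the spectral density is
# Gaussian with standard deviation 1/lam.
M, lam, tau2 = 200, 0.6, 4.0
omega = rng.normal(0.0, 1.0 / lam, size=M)  # spectral frequencies
b = rng.uniform(0.0, 2 * np.pi, size=M)     # random phases
w = rng.normal(size=M)                      # feature weights

def f(t):
    """Approximate GP realization f(t) via random Fourier features."""
    return np.sqrt(2.0 * tau2 / M) * np.sum(w * np.cos(omega * t + b))

# Forward Euler solve of the assumed ODE p2*u'' + p1*u' + p0*u = f(t)
# on [0, T] with the paper's parameter values.
p2, p1, p0, T = 0.5, 1.0, 5.0, 1.0
steps = 1000
dt = T / steps
u, du = 0.0, 0.0
us = []
for k in range(steps):
    t = k * dt
    ddu = (f(t) - p1 * du - p0 * u) / p2
    u += dt * du
    du += dt * ddu
    us.append(u)
us = np.array(us)

# n noisy observations at t_i = i*T/n with sigma = 0.1.
n, sigma = 25, 0.1
idx = (np.arange(1, n + 1) * steps // n) - 1
y = us[idx] + rng.normal(0.0, sigma, size=n)
```

A real run would replace forward Euler with a higher-order scheme, as the authors themselves recommend; the paper's actual code lives in the advectionGP module linked above.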