Black Box Probabilistic Numerics

Authors: Onur Teymur, Christopher Foley, Philip Breen, Toni Karvonen, Chris J. Oates

NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Applications are presented for nonlinear ordinary and partial differential equations, as well as for eigenvalue problems, a setting for which no probabilistic numerical methods have yet been developed. Section 4 ("Experimental Assessment") reports a rigorous experimental assessment of BBPN: Section 4.1 demonstrates that BBPN is competitive with existing PN methods in the context of ordinary differential equations (ODEs), a somewhat surprising result given the black-box nature of BBPN compared to the bespoke nature of existing PN methods for ODEs; Section 4.2 demonstrates the versatility of BBPN by applying it to the nonlinear problem of eigenvalue computation, for which no PN methods currently exist; and Section 4.3 uses BBPN to provide uncertainty quantification for state-of-the-art numerical methods that aim to approximate the solution of nonlinear PDEs.
Researcher Affiliation | Collaboration | Onur Teymur (University of Kent; Alan Turing Institute), Christopher N. Foley (University of Cambridge; Optima Partners), Philip G. Breen, Toni Karvonen (University of Helsinki; Alan Turing Institute), Chris J. Oates (Newcastle University; Alan Turing Institute)
Pseudocode | No | No pseudocode or clearly labeled algorithm blocks were found in the paper.
Open Source Code | Yes | Software for BBPN, including code to reproduce the experiments in Section 4, can be downloaded from github.com/oteym/bbpn.
Open Datasets | No | The paper uses standard mathematical problems (the Lotka–Volterra IVP, the QR algorithm, the shifted power method, and the Kuramoto–Sivashinsky equation) as test cases, and the "data" for BBPN are generated by running traditional numerical methods on these problems. It does not provide concrete access information (links, DOIs, or formal citations) for publicly available datasets used as input to the numerical methods themselves.
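The following is a minimal Python sketch (the released software is MATLAB) of how such "data" arise for one of these test cases: explicit Euler runs on a Lotka–Volterra IVP at several step sizes, with the final state of each run treated as a data point. The vector-field parameters, initial condition x0, and horizon T are illustrative placeholders, not values taken from the paper.

```python
import numpy as np

def lotka_volterra(x, a=1.0, b=0.1, c=1.5, d=0.075):
    """Lotka-Volterra vector field with placeholder parameters."""
    prey, pred = x
    return np.array([a * prey - b * prey * pred,
                     -c * pred + d * prey * pred])

def euler_final_state(x0, T, h):
    """Integrate with explicit Euler (order 1) to time T using step size h
    and return the final state -- the kind of output BBPN treats as data."""
    x = np.array(x0, dtype=float)
    for _ in range(int(round(T / h))):
        x = x + h * lotka_volterra(x)
    return x

# One data point per resolution h_i = 2^{-i}, i = 1, ..., 6 (cf. Section 4.1).
step_sizes = [2.0 ** -i for i in range(1, 7)]
data = {h: euler_final_state(x0=[10.0, 5.0], T=5.0, h=h) for h in step_sizes}
```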
Dataset Splits | No | The paper describes how data points are generated by running numerical methods at different resolutions and augmented cumulatively for training the GP model (e.g., "The dataset is augmented cumulatively, so that for i = i0, all data generated by runs 1, …, i0 are used."), but it does not specify explicit training, validation, and test splits for any dataset, which would be needed for reproducibility.
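A minimal sketch of the quoted augmentation rule, assuming each run contributes a list of solver outputs; the GP fitting step that consumes these training sets is omitted.

```python
def cumulative_datasets(runs):
    """Given per-run outputs [D_1, ..., D_n], return the cumulatively
    augmented training sets: the i0-th set contains all data from
    runs 1, ..., i0."""
    datasets, pool = [], []
    for d in runs:
        pool = pool + list(d)      # retain all earlier runs' data
        datasets.append(list(pool))
    return datasets

# E.g. three runs at decreasing step sizes yield training sets of sizes 1, 2, 3.
sets = cumulative_datasets([[0.9], [0.95], [0.975]])
assert [len(s) for s in sets] == [1, 2, 3]
```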
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU models, CPU types, or memory specifications.
Software Dependencies | Yes | The Tensor Toolbox for MATLAB, version 3.2.1.
Experiment Setup | Yes | Default settings: Matérn(1/2) kernels are used for φ_i and for the second kernel factor, i.e. φ_i(t_i, t'_i) = exp(−‖t_i − t'_i‖ / ℓ_{t,i}), and similarly, mutatis mutandis, for the other factor. These kernels impose a minimal continuity assumption on q without assuming additional levels of smoothness; sensitivity of the results to the choice of kernel is investigated in Appendix C.2. For all experiments in the article, the kernel hyperparameters were set using their maximum likelihood estimates. For the ODE illustrations, the data consist of the final states produced by either an Euler (order 1) or an Adams–Bashforth (order 2) algorithm, run at the resolutions {h_i = 2^{−i}, i = 1, …, 6}. For the PDE experiments, the minimum temporal step size is h = δt with, for simplicity, a fixed spatial step size δx = 0.001 throughout; for the h = 0.002 simulation in Figure 4, h_i ∈ {0.002, 0.005, 0.01, 0.02, 0.05}, for the h = 0.005 simulation h_i ∈ {0.005, 0.01, 0.02, 0.05, 0.1}, and for the h = 0.01 simulation h_i ∈ {0.01, 0.02, 0.05, 0.1, 0.2}.
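To make the two quoted ingredients concrete, here is a minimal Python sketch (not the authors' MATLAB implementation) of a Matérn(1/2) kernel and of lengthscale selection by maximizing the marginal likelihood of a zero-mean Gaussian process. The toy inputs t and y are illustrative, and the jitter term is an assumption added for numerical stability.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def matern12(t, t_prime, ell):
    """Matérn(1/2) kernel: k(t, t') = exp(-|t - t'| / ell)."""
    return np.exp(-np.abs(t[:, None] - t_prime[None, :]) / ell)

def neg_log_marginal_likelihood(log_ell, t, y, jitter=1e-8):
    """Negative log marginal likelihood of a zero-mean GP with the
    kernel above; minimized over log_ell to obtain the ML lengthscale."""
    K = matern12(t, t, np.exp(log_ell)) + jitter * np.eye(len(t))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ alpha + np.log(np.diag(L)).sum()

# Toy data: pretend y are solver outputs indexed by step size t.
t = np.array([0.5, 0.25, 0.125, 0.0625])
y = np.array([0.92, 0.96, 0.98, 0.99])
res = minimize_scalar(neg_log_marginal_likelihood, args=(t, y),
                      bounds=(-5.0, 2.0), method="bounded")
ell_ml = np.exp(res.x)
```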