Bayesian Quadrature for Multiple Related Integrals
Authors: Xiaoyue Xi, Francois-Xavier Briol, Mark Girolami
ICML 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We then prove convergence rates for the method in the well-specified and misspecified cases, and demonstrate its efficiency in the context of multi-fidelity models for complex engineering systems and a problem of global illumination in computer graphics. |
| Researcher Affiliation | Academia | Department of Mathematics, Imperial College London; Department of Statistics, University of Warwick; The Alan Turing Institute for Data Science and AI. |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statements about open-source code availability or links to code repositories for the described methodology. |
| Open Datasets | No | The paper uses toy problems and functions defined within the paper (e.g., Step function, Forrester function) or refers to functions from a cited paper (Raissi & Karniadakis, 2016), but does not provide concrete access information (link, DOI, repository) to a publicly available dataset file. |
| Dataset Splits | No | The paper does not specify training, validation, or test dataset splits. It mentions using '20 equidistant points' and some of these points for high-fidelity model evaluation, which is a sampling strategy, not a standard dataset split. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not list any specific software dependencies with version numbers. |
| Experiment Setup | Yes | In both cases, 20 equidistant points are used, with points 4, 10, 11, 14 and 17 used to evaluate the high-fidelity model and the others used for the low-fidelity model. The choice of kernel hyperparameters is made by maximising the marginal likelihood (often called empirical Bayes). |
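The empirical Bayes step mentioned in the Experiment Setup row can be sketched as follows. This is a minimal, hypothetical illustration (not the authors' code, which is not released): a 1-D Gaussian process with a squared-exponential kernel, where the lengthscale is chosen by maximising the log marginal likelihood over a grid. The design of 20 equidistant points mirrors the paper's setup; the `sin` integrand and the grid bounds are stand-in assumptions.

```python
import numpy as np

def rbf_kernel(x1, x2, lengthscale):
    # Squared-exponential kernel on 1-D inputs.
    d = x1[:, None] - x2[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def log_marginal_likelihood(x, y, lengthscale, jitter=1e-5):
    # GP log marginal likelihood; jitter keeps the Cholesky factorisation stable.
    K = rbf_kernel(x, x, lengthscale) + jitter * np.eye(len(x))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha
            - np.sum(np.log(np.diag(L)))
            - 0.5 * len(x) * np.log(2 * np.pi))

def fit_lengthscale(x, y, grid):
    # Empirical Bayes: pick the lengthscale maximising the marginal likelihood.
    return max(grid, key=lambda ls: log_marginal_likelihood(x, y, ls))

# 20 equidistant design points, as in the paper's experiments.
x = np.linspace(0.0, 1.0, 20)
y = np.sin(2 * np.pi * x)  # stand-in integrand (the paper uses e.g. the Forrester function)
best = fit_lengthscale(x, y, np.logspace(-2, 0, 25))
```

In practice one would optimise the marginal likelihood with a gradient-based method over all kernel hyperparameters (lengthscale, amplitude, noise); the grid search above just keeps the sketch short and deterministic.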