Kernelized Normalizing Constant Estimation: Bridging Bayesian Quadrature and Bayesian Optimization

Authors: Xu Cai, Jonathan Scarlett

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our findings are supported by both algorithm-independent lower bounds and algorithmic upper bounds, as well as simulation studies conducted on a variety of benchmark functions.
Researcher Affiliation | Academia | (1) Department of Computer Science, National University of Singapore; (2) Department of Mathematics, Institute of Data Science, National University of Singapore
Pseudocode | Yes | Algorithm 1: Two-batch normalizing constant estimation algorithm (a hedged sketch of a two-batch scheme appears below the table).
Open Source Code | No | The paper does not provide any explicit statements or links to open-source code for the described methodology.
Open Datasets | Yes | Benchmark functions. Exact formulations of the functions, including Ackley, Alpine, Product-Peak, and Zhou, can be found in (Bingham 2013): Virtual Library of Simulation Experiments: Test Functions and Datasets, https://www.sfu.ca/~ssurjano/index.html (accessed 2023-08-05). Sketches of two of these functions appear below the table.
Dataset Splits | No | The paper allocates samples across batches (e.g., T/2 samples per batch) and discusses time horizons, but it does not specify explicit training/validation/test splits in terms of percentages or sample counts for the experimental data.
Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments.
Software Dependencies | No | The paper mentions the built-in SciPy optimizer based on L-BFGS-B and Langevin Monte Carlo (LMC), but does not specify version numbers for these or any other software components or libraries (an L-BFGS-B usage sketch appears below the table).
Experiment Setup | Yes | For all functions considered in this section, we consider a time horizon of T = 256, λ ∈ {0.5, 5, 10}, σ ∈ {0, 0.01, 0.1}, and ν ∈ {0.5, 1.5, 2.5}. The total number of steps of (6) is set to 20, and the LMC learning rate is β = 10⁻³ (a configuration sketch appears below the table).
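The Pseudocode row refers to the paper's Algorithm 1. Below is a minimal, hedged Python sketch of a generic two-batch scheme for estimating Z = ∫ exp(λ f(x)) dx over [0, 1]: the first T/2 queries reduce global GP uncertainty via maximum posterior variance, and the second T/2 queries concentrate where exp(λ · mean) dominates the integral. The kernel choice, the second-batch acquisition rule, and the final plug-in quadrature are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def matern52(a, b, ell=0.2):
    """Matern kernel with nu = 5/2 on a 1-D domain (illustrative choice)."""
    r = np.abs(a[:, None] - b[None, :]) / ell
    s = np.sqrt(5.0) * r
    return (1.0 + s + s ** 2 / 3.0) * np.exp(-s)

def gp_posterior(X, y, Xs, sigma=0.01, jitter=1e-8):
    """Standard GP regression posterior mean and variance at test points Xs."""
    K = matern52(X, X) + (sigma ** 2 + jitter) * np.eye(len(X))
    Ks = matern52(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    v = np.linalg.solve(L, Ks)
    # Prior variance k(x, x) = 1 for this kernel.
    var = np.clip(1.0 - np.sum(v ** 2, axis=0), 0.0, None)
    return Ks.T @ alpha, var

def two_batch_estimate(f, T=64, lam=5.0, sigma=0.01, n_grid=400):
    """Estimate Z = int_0^1 exp(lam * f(x)) dx with two batches of T/2 queries."""
    grid = np.linspace(0.0, 1.0, n_grid)
    X = np.array([0.5])
    y = np.array([f(0.5) + sigma * rng.standard_normal()])
    # Batch 1: global exploration by maximum posterior variance.
    for _ in range(T // 2 - 1):
        _, var = gp_posterior(X, y, grid, sigma)
        xn = grid[np.argmax(var)]
        X = np.append(X, xn)
        y = np.append(y, f(xn) + sigma * rng.standard_normal())
    # Batch 2: exploit where exp(lam * mean) contributes most (illustrative rule).
    for _ in range(T // 2):
        mean, var = gp_posterior(X, y, grid, sigma)
        xn = grid[np.argmax(np.exp(lam * mean) * np.sqrt(var))]
        X = np.append(X, xn)
        y = np.append(y, f(xn) + sigma * rng.standard_normal())
    # Plug-in quadrature of the posterior mean over the unit-length domain.
    mean, _ = gp_posterior(X, y, grid, sigma)
    return np.exp(lam * mean).mean()

print(two_batch_estimate(lambda x: -10.0 * (x - 0.3) ** 2))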
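For the Open Datasets row, the benchmark functions are fully specified by closed forms in the Virtual Library (Bingham 2013), so the "dataset" is reproducible from code alone. Two of them are sketched below: the Ackley constants (a = 20, b = 0.2, c = 2π) are the library's defaults, Alpine follows the common "Alpine N.1" definition, and any rescaling or negation the paper applies before forming exp(λf) is not shown.

```python
import numpy as np

def ackley(x, a=20.0, b=0.2, c=2.0 * np.pi):
    """Ackley function with the Virtual Library's default constants."""
    x = np.asarray(x, dtype=float)
    d = x.size
    return (-a * np.exp(-b * np.sqrt(np.sum(x ** 2) / d))
            - np.exp(np.sum(np.cos(c * x)) / d) + a + np.e)

def alpine(x):
    """Alpine N.1 function: sum_i |x_i sin(x_i) + 0.1 x_i|."""
    x = np.asarray(x, dtype=float)
    return np.sum(np.abs(x * np.sin(x) + 0.1 * x))

# Both attain their global minimum of 0 at the origin.
print(ackley(np.zeros(2)), alpine(np.zeros(2)))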
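The Software Dependencies row mentions SciPy's built-in L-BFGS-B optimizer. The call below shows the standard scipy.optimize interface with box bounds; the objective is a hypothetical placeholder, since the paper does not state exactly which quantity (e.g., a negated GP surrogate) it optimizes with L-BFGS-B.

```python
import numpy as np
from scipy.optimize import minimize

def objective(x):
    # Hypothetical stand-in; in the paper's pipeline this would be the
    # (negated) surrogate quantity being maximized.
    return float(np.sum((x - 0.3) ** 2))

result = minimize(objective, x0=np.array([0.5, 0.5]),
                  method="L-BFGS-B", bounds=[(0.0, 1.0)] * 2)
print(result.x, result.fun)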
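The Experiment Setup row fixes T = 256, a grid over (λ, σ, ν), 20 steps of update (6), and an LMC learning rate β = 10⁻³. A minimal sketch of that configuration follows, using the standard unadjusted Langevin update x ← x + β ∇log π(x) + √(2β) ξ as a stand-in for the paper's update (6); the target gradient is a hypothetical example.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Hyperparameter grid reported in the paper.
T = 256
grid = list(itertools.product([0.5, 5, 10],      # lambda
                              [0, 0.01, 0.1],    # sigma (noise level)
                              [0.5, 1.5, 2.5]))  # nu (Matern smoothness)

def lmc(grad_log_pi, x0, beta=1e-3, n_steps=20):
    """Unadjusted Langevin Monte Carlo with the reported step count and rate."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        x = x + beta * grad_log_pi(x) + np.sqrt(2.0 * beta) * rng.standard_normal(x.shape)
    return x

# Example target: pi(x) ∝ exp(lam * f(x)) with f(x) = -||x||^2,
# so grad log pi(x) = -2 * lam * x (hypothetical integrand).
for lam, sigma, nu in grid[:1]:  # one configuration shown
    sample = lmc(lambda x: -2.0 * lam * x, x0=np.ones(2))
    print(lam, sigma, nu, sample)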