Truncated Variance Reduction: A Unified Approach to Bayesian Optimization and Level-Set Estimation

Authors: Ilija Bogunovic, Jonathan Scarlett, Andreas Krause, Volkan Cevher

NeurIPS 2016

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "We demonstrate the effectiveness of the algorithm on both synthetic and real-world data sets." |
| Researcher Affiliation | Academia | 1. Laboratory for Information and Inference Systems (LIONS), EPFL; 2. Learning and Adaptive Systems Group, ETH Zürich |
| Pseudocode | Yes | Algorithm 1: Truncated Variance Reduction (TRUVAR); Algorithm 2: Parameter Updates for TRUVAR |
| Open Source Code | No | The paper points to publicly available code for ES and MRS ([20] http://github.com/jmetzen/bayesian_optimization), which are prior works, but provides no access to source code for the TRUVAR algorithm developed in this paper. |
| Open Datasets | Yes | Lake Zürich [19]; the SVM-on-grid dataset previously used in [21] |
| Dataset Splits | No | The paper discusses "validation error" in the context of hyperparameter tuning, but gives no details on how the dataset was split into training, validation, and test sets (e.g., percentages, sample counts, or references to predefined splits). |
| Hardware Specification | No | The paper does not specify the hardware (e.g., GPU models, CPU types, memory) used to run the experiments. |
| Software Dependencies | No | The paper does not list software dependencies with version numbers (e.g., programming languages, libraries, frameworks, or solvers). |
| Experiment Setup | Yes | "As with previous GP-based algorithms that use confidence bounds, our theoretical choice of β_(i) in TRUVAR is typically overly conservative. Therefore, instead of using (14) directly, we use a more aggressive variant with similar dependence on the domain size and time: β_(i) = a log(\|D\| t_(i)²), where t_(i) is the time at which the epoch starts and a is a constant. Instead of the choice a = 2 dictated by (14), we set a = 0.5 for BO to avoid over-exploration. We found exploration to be slightly more beneficial for LSE, and hence set a = 1 for this setting. We found TRUVAR to be quite robust with respect to the choices of the remaining parameters, and simply set η_(1) = 1, r = 0.1, and δ = 0 in all experiments." |
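The experiment-setup quote above fully specifies the confidence-parameter schedule, so it can be sketched in a few lines. This is a minimal illustration of the formula β_(i) = a log(|D| t_(i)²) as reported by the paper; the function name and the example domain size and epoch start time are illustrative, not taken from the paper.

```python
import math

def beta(a: float, domain_size: int, t_epoch: int) -> float:
    """Aggressive confidence parameter used in the paper's experiments:
    beta_(i) = a * log(|D| * t_(i)^2),
    where |D| is the domain size and t_(i) is the time at which epoch i starts.
    """
    return a * math.log(domain_size * t_epoch ** 2)

# The paper sets a = 0.5 for Bayesian optimization and a = 1 for level-set
# estimation (the theoretical choice a = 2 from (14) over-explores in practice).
beta_bo = beta(0.5, domain_size=10_000, t_epoch=50)   # BO setting (illustrative values)
beta_lse = beta(1.0, domain_size=10_000, t_epoch=50)  # LSE setting (illustrative values)
```

With the same domain size and epoch start, the LSE value is exactly twice the BO value, reflecting that only the constant `a` changes between the two settings.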