Optimization, fast and slow: optimally switching between local and Bayesian optimization
Authors: Mark McLeod, Stephen Roberts, Michael A. Osborne
ICML 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We develop the first Bayesian Optimization algorithm, BLOSSOM, which selects between multiple alternative acquisition functions and traditional local optimization at each step. This is combined with a novel stopping condition based on expected regret. This pairing allows us to obtain the best characteristics of both local and Bayesian optimization, making efficient use of function evaluations while yielding superior convergence to the global minimum on a selection of optimization problems, and also halting optimization once a principled and intuitive stopping condition has been fulfilled. |
| Researcher Affiliation | Academia | 1Department of Engineering Science, University of Oxford 2Oxford-Man Institute of Quantitative Finance. Correspondence to: Mark McLeod <markm@robots.ox.ac.uk>. |
| Pseudocode | Yes | Algorithm 1 Positive Definite Test; Algorithm 2 Positive Definite Sphere Radius |
| Open Source Code | No | The paper does not provide any specific links or explicit statements about the release of its own source code for the methodology described. |
| Open Datasets | Yes | We now give results for several common test objectives for global optimization, illustrated in Figure 5. [...] We use BLOSSOM to optimize the input and output scale hyperparameters of a Gaussian Process using 6 months of half hourly measurements of UK electricity demand during 2015 (www2.nationalgrid.com/UK/Industryinformation/Electricity-transmission-operational-data/Dataexplorer). |
| Dataset Splits | No | The paper does not explicitly state specific training, validation, and test dataset splits (e.g., percentages or sample counts) for the benchmark functions or the UK electricity demand data. |
| Hardware Specification | No | The paper does not specify any particular hardware used for running the experiments (e.g., GPU models, CPU types, or cloud resources). |
| Software Dependencies | No | The paper discusses algorithms and models (e.g., Gaussian Processes, BFGS) but does not provide specific version numbers for any ancillary software dependencies (e.g., programming languages, libraries, or frameworks). |
| Experiment Setup | Yes | We have selected a gradient estimate of less than 10⁻⁶ as our stopping condition, but any other method could be used. [...] For each algorithm we test multiple values of the stopping criteria, shown in the legend as appropriate. |
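The stopping condition quoted above (halt once the estimated gradient magnitude falls below 10⁻⁶) can be sketched as a simple loop. This is a minimal illustration, not the paper's implementation: the quadratic objective `f`, the central-difference gradient estimator, and the fixed step size are all hypothetical stand-ins.

```python
import numpy as np

def grad_norm(f, x, eps=1e-7):
    """Central-difference estimate of the gradient norm of f at x."""
    g = np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        step = np.zeros_like(x, dtype=float)
        step[i] = eps
        g[i] = (f(x + step) - f(x - step)) / (2 * eps)
    return np.linalg.norm(g)

# Toy objective: quadratic bowl with its minimum at the origin.
f = lambda x: float(np.sum(x ** 2))

x = np.array([1.0, -2.0])
# Stop once the gradient estimate drops below 1e-6, as in the quoted setup.
while grad_norm(f, x) >= 1e-6:
    x = x - 0.4 * (2 * x)  # gradient-descent step on the toy objective

print(x)
```

Any local optimizer could be substituted for the descent step; the point is only that the loop terminates on a gradient-magnitude threshold rather than a fixed evaluation budget.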