Distributed Batch Gaussian Process Optimization
Authors: Erik A. Daxberger, Bryan Kian Hsiang Low
ICML 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical evaluation on synthetic benchmark objective functions and a real-world optimization problem shows that DB-GP-UCB outperforms the state-of-the-art batch BO algorithms. |
| Researcher Affiliation | Academia | 1Ludwig-Maximilians-Universität, Munich, Germany. A substantial part of this research was performed during his student exchange program at the National University of Singapore under the supervision of Bryan Kian Hsiang Low and culminated in his Bachelor's thesis. 2Department of Computer Science, National University of Singapore, Republic of Singapore. Correspondence to: Bryan Kian Hsiang Low <lowkh@comp.nus.edu.sg>. |
| Pseudocode | Yes | Pseudocode for DB-GP-UCB is provided in Appendix E. |
| Open Source Code | No | The paper states that implementations of *other* algorithms are publicly available and that their own algorithm was implemented in MATLAB using the GPML toolbox, but it does not state that their specific source code is publicly released or provide a link to it. |
| Open Datasets | Yes | synthetic benchmark objective functions such as Branin-Hoo (Lizotte, 2008) and gSobol (González et al., 2016) (Table 3 in Appendix H) and a real-world pH field of Broom's Barn farm (Webster & Oliver, 2007) (Fig. 3 in Appendix H) spatially distributed over a 1200 m by 680 m region discretized into a 31 × 18 grid of sampling locations. |
| Dataset Splits | No | The paper describes a fixed budget of function evaluations and random initialization but does not specify traditional training/validation/test dataset splits in percentages or counts. |
| Hardware Specification | Yes | All experiments are run on a Linux system with Intel Xeon E5-2670 at 2.6GHz with 96 GB memory. |
| Software Dependencies | No | The paper mentions implementing the algorithm in "MATLAB" and using the "GPML toolbox" but does not provide specific version numbers for either software dependency. |
| Experiment Setup | Yes | For each experiment, 5 noisy observations are randomly selected and used for initialization. For our experiments, we use a fixed budget of T·|DT| = 64 function evaluations and analyze the trade-off between batch size |DT| (i.e., 2, 4, 8, 16) vs. time horizon T (respectively, 32, 16, 8, 4) on the performance of the tested algorithms. Our DB-GP-UCB algorithm uses the configurations of [N, B] = [4, 2], [8, 5], [16, 10] in the experiments with batch size |DT| = 4, 8, 16, respectively; in the case of |DT| = 2, we use our batch variant of GP-UCB (2), which is equivalent to DB-GP-UCB when N = 1. |
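To make the batch-size vs. time-horizon trade-off concrete, the sketch below implements a generic batch GP-UCB loop under the same fixed budget T·|DT| = 64 (e.g., T = 8 rounds of batch size 8). This is *not* DB-GP-UCB's Markov-approximation batch selection from the paper; it uses the common sequential-greedy heuristic that hallucinates the posterior mean for pending batch points. All function names, the RBF kernel, and the hyperparameters (`lengthscale`, `noise`, `beta`) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel between the rows of A and B.
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X, y, Xs, noise=0.1):
    # Standard GP regression posterior mean/variance at test points Xs.
    K = rbf_kernel(X, X) + noise**2 * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(rbf_kernel(Xs, Xs)) - np.sum(v**2, 0)
    return mu, np.maximum(var, 1e-12)

def gp_ucb_batch(f, Xcand, T, batch_size, beta=2.0, n_init=5, rng=None):
    # Run T rounds; each round selects batch_size points greedily by the
    # UCB score mu + sqrt(beta * var), hallucinating the posterior mean
    # as the observation for already-chosen batch members (a standard
    # heuristic, not the paper's DB-GP-UCB decomposition).
    rng = np.random.default_rng(rng)
    init = rng.choice(len(Xcand), n_init, replace=False)
    X, y = Xcand[init], f(Xcand[init])          # random initialization
    for _ in range(T):
        Xh, yh, batch = X.copy(), y.copy(), []
        for _ in range(batch_size):
            mu, var = gp_posterior(Xh, yh, Xcand)
            j = int(np.argmax(mu + np.sqrt(beta * var)))
            batch.append(j)
            Xh = np.vstack([Xh, Xcand[j]])
            yh = np.append(yh, mu[j])           # hallucinated observation
        Xb = Xcand[batch]
        X, y = np.vstack([X, Xb]), np.append(y, f(Xb))  # real evaluations
    return X[n_init:], y[n_init:]               # queried points and values
```

With a 64-evaluation budget one would call, e.g., `gp_ucb_batch(f, Xcand, T=8, batch_size=8)` or `gp_ucb_batch(f, Xcand, T=32, batch_size=2)` to compare the trade-off the paper studies.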