A Framework for Bayesian Optimization in Embedded Subspaces
Authors: Amin Nayebi, Alexander Munteanu, Matthias Poloczek
ICML 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We present a theoretically founded approach for high-dimensional Bayesian optimization based on low-dimensional subspace embeddings. [...] Moreover, we provide an efficient implementation based on hashing and demonstrate empirically that this subspace embedding achieves considerably better results than the previously proposed methods for high-dimensional BO based on Gaussian matrix projections and structure-learning. [...] An experimental evaluation that demonstrates state-of-the-art performance when the proposed embedding is combined with a low-dimensional BO algorithm, e.g., Knowledge Gradient (KG) (Frazier et al., 2009) or BLOSSOM (BM) (McLeod et al., 2018). |
| Researcher Affiliation | Collaboration | 1Dortmund Data Science Center, Faculties of Statistics and Computer Science, TU Dortmund, Dortmund, Germany 2Department of Systems and Industrial Engineering, University of Arizona, Tucson, AZ, USA 3Uber AI Labs, San Francisco, CA, USA. Correspondence to: Matthias Poloczek <poloczek@uber.com>. |
| Pseudocode | Yes | Algorithm 1: The Generic BO Algorithm with the probabilistic subspace embedding; Algorithm 2: Construction of an inverse subspace embedding S† implicitly given by (h, σ), see details in the text; Algorithm 3: Computes S†y ∈ X via the inverse subspace embedding S† implicitly given by (h, σ) |
| Open Source Code | Yes | We implemented HeSBO-EI and all REMBO variants in Python 3 using GPy. The code for the embedding is available at github.com/aminnayebi/HesBO. |
| Open Datasets | Yes | We compared the algorithms on the following test functions: (1) Branin, (2) Hartmann-6, (3) Rosenbrock, and (4) Styblinski-Tang (StybTang) with input dimension D ∈ {25, 100, 1000}. [...] The goal is to choose the weights between the hidden layer and the outputs in order to minimize the loss on the MNIST data set (LeCun et al., 2017). |
| Dataset Splits | No | No specific dataset split information (percentages, counts, or explicit methodology for splitting into train/validation/test sets) was provided in the paper's main text. |
| Hardware Specification | No | No specific hardware details (like exact GPU/CPU models, memory amounts, or detailed computer specifications) used for running experiments were provided, only that 'all experiments were performed on dedicated machines with identical resources'. |
| Software Dependencies | No | The paper states 'We implemented HeSBO-EI and all REMBO variants in Python 3 using GPy.' However, specific version numbers for GPy or other key software components are not provided. |
| Experiment Setup | No | No specific experimental setup details such as concrete hyperparameter values or training configurations are explicitly provided in the main text. The paper mentions 'See Sect. D for more details' regarding experimental setup, implying details are in the supplement, not the main body. |
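The hashing-based embedding referenced in the pseudocode row corresponds to a count-sketch-style construction: each high dimension i is hashed to one low dimension h(i) and assigned a random sign σ(i), so a low-dimensional candidate y is lifted to a high-dimensional point x with x_i = σ(i)·y_{h(i)}. The sketch below is a hypothetical re-implementation from that description, not the authors' released code; the function name `make_hesbo_embedding` and all parameter names are our own.

```python
import random

def make_hesbo_embedding(D, d, seed=0):
    """Hypothetical sketch of a HeSBO-style count-sketch embedding.

    Draws a hash h: [D] -> [d] and signs sigma: [D] -> {-1, +1},
    then returns a function lifting a d-dimensional point y to a
    D-dimensional point x with x_i = sigma[i] * y[h[i]].
    """
    rng = random.Random(seed)
    h = [rng.randrange(d) for _ in range(D)]        # one target coordinate per dimension
    sigma = [rng.choice((-1, 1)) for _ in range(D)]  # random sign per dimension

    def embed(y):
        # BO runs in the d-dimensional subspace; the black-box function
        # is evaluated at the lifted D-dimensional point.
        assert len(y) == d
        return [sigma[i] * y[h[i]] for i in range(D)]

    return embed

# Example: lift a 2-dimensional candidate into D = 25 dimensions.
embed = make_hesbo_embedding(D=25, d=2, seed=42)
x = embed([0.5, -0.3])
```

Each entry of `x` is one of the two low-dimensional coordinates up to sign, which is what makes the embedding cheap to store (two integer arrays) compared with a dense D×d Gaussian projection matrix.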