Automating Bayesian optimization with Bayesian optimization
Authors: Gustavo Malkomes, Roman Garnett
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we demonstrate empirically that our approach is consistently competitive with or outperforms other strong baselines across several domains: benchmark functions for global optimization, hyperparameter tuning of machine learning algorithms, reinforcement learning for robotics, and determining cosmological parameters of a physical model of the Universe. |
| Researcher Affiliation | Academia | Gustavo Malkomes, Roman Garnett Department of Computer Science and Engineering Washington University in St. Louis St. Louis, MO 63130 {luizgustavo, garnett}@wustl.edu |
| Pseudocode | Yes | Algorithm 1 Automated Bayesian Optimization |
| Open Source Code | Yes | Our code and data will be available online: https://github.com/gustavomalkomes/abo. |
| Open Datasets | Yes | Benchmark functions for global optimization. Our first set of experiments are test functions commonly used as benchmarks for optimization [20]. ... [20] Sonja Surjanovic and Derek Bingham. Optimization test functions and datasets, 2017. URL http://www.sfu.ca/~ssurjano/optimization.html. |
| Dataset Splits | No | The paper describes repeating experiments and using initial random examples, but it does not specify explicit train/validation/test dataset splits with percentages or sample counts for reproduction. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper mentions 'MATLAB fmincon' but does not provide specific version numbers for this or any other software dependencies needed to replicate the experiment. |
| Experiment Setup | Yes | All multiple-model strategies (BOM, MCMC and ABO) start with the same selection of models (See Section 4) and they aim to maximize the model-marginalized expected improvement (8). Both adaptive algorithms (ABO and MCMC) are allowed to perform five model evidence computations before each function evaluation; ABO queries five new models and MCMC performs five new proposals. In our experiments, we limited the number of models to 50, always keeping those with the higher model evidence. All methods used L-BFGS to optimize each model's hyperparameters. To avoid bad local minima, we perform two restarts, each beginning from a sample of p(θ | M). |
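
The experiment setup above amounts to a model-marginalized expected-improvement loop over a pool of candidate GP models. Below is a minimal Python sketch under stated assumptions: the authors' released code is MATLAB and their equation (8) is not reproduced here, so the helper names `candidate_models` and `model_marginalized_ei`, and the use of scikit-learn GPs (whose hyperparameters are fit by L-BFGS-B with restarts), are illustrative stand-ins rather than the paper's implementation.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern, ConstantKernel

# A small fixed set of candidate GP models; the paper starts every
# multi-model strategy from the same selection and caps the pool at 50.
candidate_models = [
    GaussianProcessRegressor(kernel=ConstantKernel() * RBF(),
                             normalize_y=True, n_restarts_optimizer=2),
    GaussianProcessRegressor(kernel=ConstantKernel() * Matern(nu=2.5),
                             normalize_y=True, n_restarts_optimizer=2),
    GaussianProcessRegressor(kernel=ConstantKernel() * Matern(nu=1.5),
                             normalize_y=True, n_restarts_optimizer=2),
]

def model_marginalized_ei(x_cand, models, X, y):
    """Expected improvement averaged over models, weighted by each model's
    (approximate) evidence -- an illustrative stand-in for equation (8)."""
    y_best = y.min()                      # minimization convention
    log_evidence, ei_per_model = [], []
    for gp in models:
        gp.fit(X, y)                      # hyperparameters fit by L-BFGS-B with restarts
        log_evidence.append(gp.log_marginal_likelihood_value_)
        mu, sigma = gp.predict(x_cand, return_std=True)
        sigma = np.maximum(sigma, 1e-12)
        gamma = (y_best - mu) / sigma
        ei_per_model.append(sigma * (gamma * norm.cdf(gamma) + norm.pdf(gamma)))
    log_evidence = np.asarray(log_evidence)
    weights = np.exp(log_evidence - log_evidence.max())
    weights /= weights.sum()              # posterior-style model weights
    return np.sum(weights[:, None] * np.asarray(ei_per_model), axis=0)

# Toy usage on a 1-D objective: choose the next point by maximizing the
# model-marginalized acquisition on a grid.
rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(5, 1))
y = np.sin(3.0 * X).ravel() + 0.1 * rng.standard_normal(5)
grid = np.linspace(-2.0, 2.0, 200).reshape(-1, 1)
acquisition = model_marginalized_ei(grid, candidate_models, X, y)
print("next evaluation point:", grid[np.argmax(acquisition)])
```

In the paper's experiments the adaptive methods (ABO and MCMC) additionally perform five model-evidence computations before each function evaluation to update the model pool; that outer model-search loop is omitted from this sketch.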