Knowing The What But Not The Where in Bayesian Optimization

Authors: Vu Nguyen, Michael A. Osborne

ICML 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We validate our model using benchmark functions and tuning a deep reinforcement learning algorithm where we observe the optimum value in advance. These experiments demonstrate that our proposed framework works both intuitively better and experimentally outperforms the baselines." Also, from Section 4 (Experiments): "The main goal of our experiments is to show that we can effectively exploit the known optimum output to improve Bayesian optimization performance. We first demonstrate the efficiency of our model on benchmark functions. Then, we perform hyperparameter optimization for a XGBoost classification on Skin Segmentation dataset and a deep reinforcement learning task on Cart Pole problem in which the optimum values are publicly available."
Researcher Affiliation | Academia | "University of Oxford, UK."
Pseudocode | Yes | "Algorithm 1 BO with known optimum output." (An illustrative sketch of such a loop appears below the table.)
Open Source Code | Yes | "We provide additional experiments in the supplement and the code is released at github.com/ntienvu/Known_Optimum_BO."
Open Datasets | Yes | "XGBoost classification. We demonstrate a classification task using XGBoost (Chen & Guestrin, 2016) on a Skin Segmentation dataset where we know the best accuracy is f* = 100, as shown in Table 1 of Le et al. (2016)." The dataset is available at https://archive.ics.uci.edu/ml/datasets/skin+segmentation.
Dataset Splits | No | The paper states: "The Skin Segmentation dataset is split into 15% for training and 85% for testing for a classification problem." However, it does not explicitly mention a validation split or give details about how validation data was used for hyperparameter tuning or early stopping.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments. It only implies that computational resources were used for tasks such as deep reinforcement learning.
Software Dependencies | No | "All implementations are in Python." The paper also names libraries and baselines such as XGBoost, GP-UCB (Srinivas et al., 2010), and EI (Mockus et al., 1978), but it does not provide specific version numbers for any of them.
Experiment Setup | Yes | Table 1 of the paper ("Hyperparameters for XGBoost. Known f* = 100 (Accuracy)") lists the search ranges and the best values found: min_child_weight in [1, 20], found 4.66; colsample_bytree in [0.1, 1], found 0.99; max_depth in [5, 15], found 9.71; subsample in [0.5, 1], found 0.77; alpha in [0, 10], found 0.82; gamma in [0, 10], found 0.51. Also: "In particular, we use the advantage actor critic (A2C) (Sutton & Barto, 1998) which possesses three sensitive hyperparameters, including the discount factor γ, the learning rate for actor model, α1, and the learning rate for critic model, α2." (A hedged sketch of the XGBoost setup follows below.)
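
The sketch below is not the paper's Algorithm 1; it only illustrates, under simple assumptions, how a known optimum value f* can enter a standard GP-based Bayesian optimization loop, here through an expected-regret-style criterion computed from the GP posterior. The toy objective, kernel choice, grid-based acquisition search, and the criterion itself are illustrative assumptions, not details taken from the paper.

# Minimal sketch: a GP-based BO loop that uses a KNOWN optimum value f_star.
# This is NOT the paper's Algorithm 1; the acquisition below (an expected-
# regret-style criterion) and all other choices are illustrative assumptions.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):
    # Toy 1-D objective whose optimum value is known: f_star = 0 at x = 0.3.
    return -(x - 0.3) ** 2

f_star = 0.0                                   # the known "what"
lo, hi = 0.0, 1.0                              # the unknown "where" is searched here

rng = np.random.default_rng(0)
X = rng.uniform(lo, hi, size=(3, 1))           # small initial design
y = np.array([objective(x[0]) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for _ in range(20):
    gp.fit(X, y)
    # Dense candidate grid; a real implementation would use a multi-start optimizer.
    cand = np.linspace(lo, hi, 1000).reshape(-1, 1)
    mu, sigma = gp.predict(cand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    # Expected regret E[max(f_star - f(x), 0)] under the GP posterior:
    # small where the model believes f(x) is close to the known optimum.
    z = (f_star - mu) / sigma
    expected_regret = (f_star - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = cand[np.argmin(expected_regret)]
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next[0]))

print("best x:", float(X[np.argmax(y)][0]), "best f:", float(y.max()))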
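
For the XGBoost experiment, the Experiment Setup row above gives the search ranges and best values found, and the Dataset Splits row quotes the 15%/85% train/test split. The sketch below wires those numbers together; the data file path, column layout, label mapping, and the rounding of max_depth to an integer are assumptions about the UCI Skin Segmentation data and the xgboost scikit-learn API, not details stated in the paper.

# Hedged sketch of the XGBoost experiment setup: the quoted 15%/85% split and
# the Table 1 search ranges / reported best values. File path, column names,
# and label mapping are assumptions, not details given in the paper.
import pandas as pd
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Search space from Table 1 as (min, max); comments give the reported "Found x".
SEARCH_SPACE = {
    "min_child_weight": (1, 20),     # found 4.66
    "colsample_bytree": (0.1, 1),    # found 0.99
    "max_depth":        (5, 15),     # found 9.71 (rounded to an int below)
    "subsample":        (0.5, 1),    # found 0.77
    "reg_alpha":        (0, 10),     # found 0.82 ("alpha" in the paper)
    "gamma":            (0, 10),     # found 0.51
}

# Assumed file layout: three color columns plus a 1/2 skin label (UCI dataset).
df = pd.read_csv("Skin_NonSkin.txt", sep=r"\s+", names=["B", "G", "R", "label"])
X, y = df[["B", "G", "R"]].values, (df["label"].values == 1).astype(int)

# Paper: "split into 15% for training and 85% for testing".
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.15, random_state=0)

clf = XGBClassifier(
    min_child_weight=4.66, colsample_bytree=0.99, max_depth=10,
    subsample=0.77, reg_alpha=0.82, gamma=0.51,
)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))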