Risk-averse Heteroscedastic Bayesian Optimization
Authors: Anastasiia Makarova, Ilnura Usmanova, Ilija Bogunovic, Andreas Krause
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we experimentally validate RAHBO on two synthetic examples and two real hyperparameter tuning tasks, and compare it with the baselines. |
| Researcher Affiliation | Academia | Anastasiia Makarova ETH Zürich anmakaro@ethz.ch Ilnura Usmanova ETH Zürich ilnurau@ethz.ch Ilija Bogunovic ETH Zürich ilijab@ethz.ch Andreas Krause ETH Zürich krausea@ethz.ch |
| Pseudocode | Yes | Algorithm 1 Risk-averse Heteroscedastic Bayesian Optimization (RAHBO) |
| Open Source Code | Yes | We provide an open-source implementation of our method (https://github.com/Avidereta/risk-averse-hetero-bo). |
| Open Datasets | Yes | We use real Swiss FEL measurements collected in [21] to train a neural network surrogate model... tune hyperparameters of a random forest classifier (RF) on a dataset of fraudulent credit card transactions [23]. |
| Dataset Splits | Yes | In k-fold cross-validation, the average metric over the validation sets is optimized: a canonical example of the repeated-experiment setting that we consider in the paper... We use the balanced accuracy score and 5 validation folds, i.e., k = 5 |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory specifications) used for running its experiments. |
| Software Dependencies | No | The paper mentions software components like 'neural network surrogate model', 'GP model', 'random forest classifier', and 'Matérn 5/2 kernels', but does not specify version numbers for any libraries or frameworks. |
| Experiment Setup | Yes | We set λ = 1 and βt = 2... We initialize the algorithms by selecting 10 inputs x at random... We use k = 10 samples at each chosen xt. The number of acquisition rounds is T = 60... We use Matérn 5/2 kernels with Automatic Relevance Determination (ARD) and normalize the input features to the unit cube. The number of acquisition rounds in one experiment is 50 and we repeat each experiment 15 times. |
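For context on the experiment setup quoted above: RAHBO evaluates each candidate input k times (e.g., k = 10 samples per chosen point, or k = 5 validation folds) and optimizes a risk-averse mean-variance objective rather than the noisy mean alone. The sketch below illustrates that scoring rule on a toy heteroscedastic objective; the function names, the toy objective, and the exact weighting are our own illustration, not the paper's implementation (which is in the linked repository).

```python
import random
import statistics

def mean_variance_score(evaluate, x, k=5, alpha=1.0):
    """Evaluate a noisy objective k times at x and return a
    risk-averse score: sample mean minus alpha * sample variance."""
    samples = [evaluate(x) for _ in range(k)]
    mean = statistics.fmean(samples)
    var = statistics.variance(samples)  # variance over the k repeats
    return mean - alpha * var

# Toy heteroscedastic objective: the noise level grows with |x|,
# so inputs far from the optimum are both worse and riskier.
_rng = random.Random(0)
def noisy_objective(x):
    return -(x - 0.3) ** 2 + _rng.gauss(0.0, 0.1 * abs(x))

# Rank two candidate inputs under the risk-averse criterion.
best = max([0.3, 0.9], key=lambda x: mean_variance_score(noisy_objective, x, k=5))
```

With deterministic inputs the variance term vanishes and the score reduces to the plain mean, which is a quick sanity check on the weighting.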