High Dimensional Bayesian Optimization via Supervised Dimension Reduction
Authors: Miao Zhang, Huiqi Li, Steven Su
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on synthetic examples and two real applications demonstrate the superiority of our algorithms for high dimensional Bayesian optimization. |
| Researcher Affiliation | Academia | (1) School of Information and Electronics, Beijing Institute of Technology, China; (2) Faculty of Engineering and Information Technology, University of Technology Sydney, Australia |
| Pseudocode | Yes | Algorithm 1 SIR-BO; Algorithm 2 KISIR-BO |
| Open Source Code | Yes | Source code and used data are available at https://github.com/MiaoZhang0525 |
| Open Datasets | Yes | We have conducted a set of experiments on two synthetic functions and two real datasets. Branin function [Lizotte, 2008]... Trimodal function... training a neural network for the Boston dataset... controlling a three-link walking robot with 25 parameters [Westervelt et al., 2007]. |
| Dataset Splits | No | The paper mentions '500 evaluation budget' and '20 independent runs' but does not specify explicit training, validation, or test dataset splits (e.g., percentages or exact counts) needed for reproduction. |
| Hardware Specification | No | The paper describes the experimental setup and results but does not provide any specific details about the hardware (e.g., GPU models, CPU types, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions software components like 'Gaussian kernel', 'DIRECT', and 'CMA-ES', but it does not provide specific version numbers for these or any programming languages or libraries. |
| Experiment Setup | Yes | We adopt Gaussian kernel with lengthscale 0.1 for kernelizing input, and Gaussian kernel with adaptive lengthscale for Gaussian processes learned by maximizing marginal likelihood function through DIRECT, and optimize acquisition function by CMA-ES. We plot means with 1/4 standard errors across 20 independent runs. Simple regrets under 500 evaluation budget. |
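The setup row above names the core ingredients: a supervised dimension-reduction step (sliced inverse regression, SIR), a Gaussian-process surrogate with a Gaussian kernel of lengthscale 0.1 on the reduced inputs, and an optimizer for the acquisition function. A minimal sketch of one SIR-BO-style iteration is below, with loudly labeled substitutions: random candidate sampling stands in for the paper's CMA-ES acquisition optimization and DIRECT hyperparameter search, a lower confidence bound stands in for the paper's acquisition function, and the jitter terms, unit prior variance, and slice counts are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def sir_directions(X, y, n_slices=5, d=2):
    """Sliced Inverse Regression: estimate a d-dimensional effective subspace.

    Sketch of the supervised dimension-reduction step; n_slices, d, and the
    1e-8 regularizer are illustrative choices, not values from the paper.
    """
    n, p = X.shape
    mu = X.mean(axis=0)
    # Whiten inputs: with L = chol(cov^{-1}), Cov((X - mu) @ L) = I.
    cov = np.cov(X, rowvar=False) + 1e-8 * np.eye(p)
    L = np.linalg.cholesky(np.linalg.inv(cov))
    Z = (X - mu) @ L
    # Slice by sorted response; average whitened inputs within each slice.
    order = np.argsort(y)
    M = np.zeros((p, p))
    for idx in np.array_split(order, n_slices):
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    # Top eigenvectors of M span the subspace in whitened coordinates;
    # multiply by L to map the directions back to the original space.
    vals, vecs = np.linalg.eigh(M)
    return L @ vecs[:, ::-1][:, :d]

def rbf(A, B, ls=0.1):
    """Gaussian kernel with fixed lengthscale 0.1, matching the paper's
    choice for kernelizing the input."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def bo_step(Xe, ye, B, bounds, rng, n_cand=500):
    """One BO iteration on the SIR-reduced inputs: GP posterior, then pick
    the candidate minimizing a lower confidence bound (stand-in acquisition,
    optimized by random sampling rather than CMA-ES)."""
    Ze = Xe @ B
    K = rbf(Ze, Ze) + 1e-6 * np.eye(len(Ze))   # jitter: assumed value
    Kinv = np.linalg.inv(K)
    cand = rng.uniform(bounds[0], bounds[1], size=(n_cand, Xe.shape[1]))
    Zc = cand @ B
    k = rbf(Zc, Ze)
    mean = k @ Kinv @ ye
    var = np.clip(1.0 - np.einsum('ij,jk,ik->i', k, Kinv, k), 1e-12, None)
    lcb = mean - 2.0 * np.sqrt(var)            # LCB coefficient: assumed
    return cand[np.argmin(lcb)]
```

On a function with low intrinsic dimension (e.g. depending on one direction of a 10-dimensional input), `sir_directions` recovers a projection aligned with that direction from evaluated points alone, and `bo_step` proposes the next full-dimensional query, which is the structure Algorithm 1 (SIR-BO) alternates over its evaluation budget.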