Adaptive and Safe Bayesian Optimization in High Dimensions via One-Dimensional Subspaces
Authors: Johannes Kirschner, Mojmir Mutny, Nicole Hiller, Rasmus Ischebeck, Andreas Krause
ICML 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our method on multiple synthetic benchmarks, where we obtain competitive performance. Further, we deploy our algorithm to optimize the beam intensity of the Swiss Free Electron Laser with up to 40 parameters while satisfying safe operation constraints. |
| Researcher Affiliation | Academia | 1 Department of Computer Science, ETH Zurich, Switzerland 2 Paul Scherrer Institut, Switzerland. |
| Pseudocode | Yes | Algorithm 1 Line Bayesian Optimization (LineBO). Require: direction oracle Π, accuracy ε, starting point x̂₀, model M₀ = (GP prior for f, g). 1: for i = 1, 2, …, K do 2: lᵢ ← Π(Mᵢ₋₁) // define direction 3: Lᵢ ← L(x̂ᵢ₋₁, lᵢ) // define subspace 4: (x̂ᵢ, Mᵢ) ← BayesianOptimization(Mᵢ₋₁, Lᵢ, ε) // includes posterior updates (Appendix A) 5: end for |
| Open Source Code | No | The paper refers to third-party libraries like GPy but does not provide a link or explicit statement about releasing the source code for its own described methodology. |
| Open Datasets | Yes | As standard benchmarks we use the Camelback (2d) and the Hartmann6 (6d) functions. Further, we use the Gaussian f(x) = exp(−4‖x‖₂²) in 10 dimensions as a benchmark where local convergence is sufficient. |
| Dataset Splits | No | The paper does not explicitly provide training, validation, or test dataset splits. It focuses on evaluating optimization performance over iterations on synthetic functions and a real-world system rather than using fixed data splits for model training. |
| Hardware Specification | No | The paper does not provide specific details about the computational hardware (e.g., CPU, GPU models, memory) used for running the experiments. It mentions the operation frequency of the Swiss FEL machine but not the computing resources. |
| Software Dependencies | No | The paper mentions using "public libraries" and refers to "GPy: A gaussian process framework in python" in the references, but it does not specify exact version numbers for these or any other key software components used in the experiments. |
| Experiment Setup | No | The paper mentions adding "Gaussian noise with standard deviation 0.2" and randomizing initial points. It states that the authors "manually chose reasonable values" for the hyperparameters but does not report those values or detailed system-level settings such as learning rates, batch sizes, or optimizer configurations. |
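To make the quoted pseudocode concrete, here is a minimal, hypothetical Python sketch of the LineBO loop on the paper's 10-dimensional Gaussian benchmark. The direction oracle is simplified to a uniform random unit vector (the paper's RandomLineBO variant), and the inner one-dimensional Bayesian optimization subroutine is replaced by a dense grid search purely for illustration; function names and parameters here are our own choices, not the authors' implementation.

```python
import numpy as np

def gaussian_benchmark(x):
    # 10-d Gaussian benchmark from the paper: f(x) = exp(-4 * ||x||_2^2)
    return np.exp(-4.0 * np.dot(x, x))

def random_direction(dim, rng):
    # Simplified direction oracle: uniform random unit vector (RandomLineBO)
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def line_bo(f, x0, iterations=20, grid=101, half_width=1.0, seed=0):
    """Sketch of the outer LineBO loop (maximization).

    The paper's inner step is GP-based Bayesian optimization restricted
    to the 1-d subspace; a grid search over that line stands in for it here.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(iterations):
        d = random_direction(x.size, rng)             # l_i = Pi(M_{i-1})
        alphas = np.linspace(-half_width, half_width, grid)
        candidates = x + alphas[:, None] * d          # L_i = L(x_{i-1}, l_i)
        values = np.array([f(c) for c in candidates])
        x = candidates[np.argmax(values)]             # inner 1-d maximization
    return x

x0 = np.full(10, 0.5)
x_opt = line_bo(gaussian_benchmark, x0)
```

Because the grid always contains the current iterate (α = 0), the objective value is non-decreasing across iterations, mirroring the local-convergence setting the paper uses this benchmark for.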