Experimental Design for Optimization of Orthogonal Projection Pursuit Models

Authors: Mojmir Mutny, Johannes Kirschner, Andreas Krause

AAAI 2020, pp. 10235–10242

Each entry below lists a reproducibility variable, the assessed result, and the supporting excerpt from the LLM response.
Research Type: Experimental
"We validate the algorithm numerically on synthetic as well as real-world optimization problems."

Researcher Affiliation: Academia
"Mojmír Mutný, ETH Zurich, MOJMIR.MUTNY@INF.ETHZ.CH; Johannes Kirschner, ETH Zurich, JKIRSCHNER@INF.ETHZ.CH; Andreas Krause, ETH Zurich, KRAUSEA@ETHZ.CH"
Pseudocode: Yes
"Algorithm 1: Orthogonal PPR Bandit Algorithm; Algorithm 2: Kernelized Thompson sampling; Algorithm 3: Experimental Design for Hessian Estimation"
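The paper supplies these algorithms only as pseudocode. As a rough illustration of the kind of step Algorithm 2 performs, the following is a minimal sketch of one round of kernelized Thompson sampling over a finite candidate set, assuming a squared-exponential kernel; the kernel choice, hyperparameters, and function names are illustrative and not taken from the paper.

```python
import numpy as np

def se_kernel(X, Y, lengthscale=0.5):
    """Squared-exponential kernel matrix between row-stacked inputs X and Y."""
    sqdist = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sqdist / (2.0 * lengthscale ** 2))

def thompson_step(X_obs, y_obs, candidates, noise_var=1e-2, rng=None):
    """Draw one function sample from the GP posterior on `candidates` and
    return the candidate that maximizes the sampled values."""
    rng = rng if rng is not None else np.random.default_rng()
    K = se_kernel(X_obs, X_obs) + noise_var * np.eye(len(X_obs))
    K_s = se_kernel(candidates, X_obs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_obs))
    mean = K_s @ alpha                                 # posterior mean on candidates
    V = np.linalg.solve(L, K_s.T)
    cov = se_kernel(candidates, candidates) - V.T @ V  # posterior covariance
    sample = rng.multivariate_normal(mean, cov + 1e-9 * np.eye(len(candidates)))
    return candidates[np.argmax(sample)]
```

In the paper's setting the GP model would live on the learned one-dimensional projections of the projection pursuit model; here the candidate set is left generic.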
Open Source Code: No
The paper does not provide an explicit statement or link to open-source code for the described methodology.
Open Datasets: Yes
"We validate our methods on standard benchmarks from the additive Bayesian optimization literature (Gardner et al. 2017). We first focus on an explanatory example in Figure 3a, where we optimize a two-dimensional function. [...] We optimize a 5-dimensional function, which is a sum of polynomials of degree 4, where the polynomial kernel was used globally but, due to sensitivity to misspecification (large Lipschitz constant), the squared exponential kernel was used along the coordinates. In the last benchmark problem (Figures 3b and 3c), which models the performance of a real-world electron laser machine [...]"
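For concreteness, a stand-in for the 5-dimensional synthetic benchmark described in that excerpt (a sum of coordinate-wise degree-4 polynomials) could look as follows; the paper does not report the actual coefficients, so the ones below are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
# One degree-4 polynomial per coordinate: 5 coefficients each, highest degree first.
coeffs = rng.normal(size=(5, 5))

def additive_poly_benchmark(x):
    """f(x) = sum_i p_i(x_i), with each p_i a degree-4 polynomial in one coordinate."""
    return sum(np.polyval(coeffs[i], x[i]) for i in range(5))

print(additive_poly_benchmark(np.zeros(5)))  # evaluates the sum of the constant terms
```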
Dataset Splits: No
The paper references datasets and benchmarks but does not explicitly provide training/validation/test splits.

Hardware Specification: No
The paper does not explicitly describe the hardware used for its experiments.
Software Dependencies: No
The paper mentions 'pymanopt' but does not specify its version or list other software dependencies with version numbers.
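Since the one reported dependency is pymanopt (version unspecified), the sketch below shows how an orthonormal projection matrix could be optimized over the Stiefel manifold with pymanopt. It assumes the pymanopt 2.x API with the autograd backend, and the toy trace objective is a placeholder, not the paper's actual objective.

```python
import autograd.numpy as anp
import pymanopt
from pymanopt.manifolds import Stiefel
from pymanopt.optimizers import SteepestDescent

d, k = 5, 2
manifold = Stiefel(d, k)  # d x k matrices with orthonormal columns

A = anp.diag(anp.arange(1.0, d + 1.0))  # toy symmetric matrix standing in for problem data

@pymanopt.function.autograd(manifold)
def cost(U):
    # Toy objective: maximize trace(U^T A U), i.e. minimize its negative.
    return -anp.trace(U.T @ A @ U)

problem = pymanopt.Problem(manifold, cost)
result = SteepestDescent(verbosity=0).run(problem)
U_opt = result.point  # orthonormal d x k projection matrix
```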
Experiment Setup: Yes
"In practice, we specify the value of ϵ = 10⁻³ in the first phase of the algorithm, and we model T_R separately, as our analysis suggests larger (but not unreasonable) values of T_R for short optimization horizons T."
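Read literally, this row pins down only ϵ. A hypothetical run configuration consistent with it might look like the sketch below, where the horizon values are placeholders, since the excerpt does not report them.

```python
# Hypothetical configuration; only `epsilon` comes from the paper's stated setup.
config = {
    "epsilon": 1e-3,  # tolerance used in the first phase of the algorithm
    "T": 500,         # overall optimization horizon (placeholder value)
    "T_R": 100,       # separately modeled recovery budget, larger for short T (placeholder)
}
```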