Preferential Bayesian Optimization

Authors: Javier González, Zhenwen Dai, Andreas Damianou, Neil D. Lawrence

ICML 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We illustrate the benefits of PBO in a variety of experiments, showing that PBO needs drastically fewer comparisons for finding the optimum.
Researcher Affiliation | Collaboration | (1) Amazon Research, Cambridge, UK; (2) University of Sheffield, UK.
Pseudocode | Yes | Algorithm 1 (The PBO algorithm). Input: dataset D_0 = {([x_i, x'_i], y_i)}_{i=1}^N, number of remaining evaluations n, and an acquisition for duels α([x, x']). For j = 0 to n: (1) fit a GP with kernel k to D_j and learn π_{f,j}(x); (2) compute the acquisition for duels α; (3) next duel: [x_{j+1}, x'_{j+1}] = arg max α([x, x']); (4) run the duel [x_{j+1}, x'_{j+1}] and obtain y_{j+1}; (5) augment D_{j+1} = D_j ∪ {([x_{j+1}, x'_{j+1}], y_{j+1})}. End for. Fit a GP with kernel k to D_n. Returns: report the current Condorcet's winner x_n.
Open Source Code | No | The paper does not provide any statement or link indicating the availability of open-source code for the methodology described.
Open Datasets | Yes | The explicit formulation of these objectives and the domains in which they are optimized are available as part of standard optimization benchmarks (https://www.sfu.ca/ssurjano/optimisation.html).
Dataset Splits | No | The paper describes a sequential optimization process where data is collected through duels, rather than using traditional pre-defined training, validation, and test dataset splits for model evaluation.
Hardware Specification | No | The paper does not specify any particular hardware (e.g., GPU models, CPU types, or cloud instances with specs) used for running its experiments.
Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies or libraries used in the experiments.
Experiment Setup | Yes | The search for the optimum of the objectives is performed on a grid (33 points per dimension in all cases), which has practical advantages: the integral in eq. (5) can easily be treated as a sum and, more importantly, PBO can be compared with bandit methods that are only defined on discrete domains. Each comparison starts with 5 initial (randomly selected) duels, and a total budget of 200 duels is run, after which the best location of the optimum should be reported.
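
The Pseudocode and Experiment Setup rows above can be illustrated with a short, self-contained sketch of the PBO loop. This is a minimal illustration rather than the paper's method: it assumes a hypothetical 1-D objective, uses scikit-learn's GaussianProcessClassifier as a stand-in for the paper's GP preference model, and replaces the paper's duel acquisitions (e.g. dueling Thompson sampling) with a simple uncertainty-based rule; the 33-point grid and the 5 random initial duels follow the setup quoted above, while the duel budget is kept small for a quick run.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

def f(x):
    # Hypothetical 1-D objective to be minimized; PBO itself never observes
    # f, only the binary outcome of each duel.
    return np.sin(3 * x) + x ** 2 - 0.7 * x

def duel(x, x_prime):
    # y = 1 if x beats x_prime (lower objective value wins); noiseless here.
    return int(f(x) < f(x_prime))

rng = np.random.default_rng(0)
grid = np.linspace(-1.0, 2.0, 33)   # 33 points per dimension, as in the paper

# 5 random initial duels; each is also added in reversed order so that both
# outcome classes are present when the classifier is first fitted.
duels, outcomes = [], []
for _ in range(5):
    a, b = rng.choice(grid, size=2, replace=False)
    duels += [(a, b), (b, a)]
    outcomes += [duel(a, b), duel(b, a)]

budget = 30                          # far below the paper's 200, for a quick run
for _ in range(budget):
    # 1. Fit a GP classifier to the duel dataset D_j = {([x, x'], y)}.
    gp = GaussianProcessClassifier(kernel=RBF(length_scale=0.5))
    gp.fit(np.array(duels), np.array(outcomes))

    # 2.-3. Acquisition over candidate duels: a simple stand-in that favours
    # duels whose outcome is most uncertain (win probability closest to 0.5).
    cand = np.array([(a, b) for a in grid for b in grid if a != b])
    p_win = gp.predict_proba(cand)[:, 1]
    x_next, x_prime_next = cand[np.argmax(-np.abs(p_win - 0.5))]

    # 4.-5. Run the selected duel and augment the dataset.
    duels.append((x_next, x_prime_next))
    outcomes.append(duel(x_next, x_prime_next))

# Final fit on D_n and report of the Condorcet winner: the grid point with the
# highest average probability of winning a duel against every grid point.
gp = GaussianProcessClassifier(kernel=RBF(length_scale=0.5))
gp.fit(np.array(duels), np.array(outcomes))
all_pairs = np.array([(a, b) for a in grid for b in grid])
copeland = gp.predict_proba(all_pairs)[:, 1].reshape(len(grid), len(grid)).mean(axis=1)
print("Estimated Condorcet winner:", grid[np.argmax(copeland)])
```

The last step reflects why the discrete grid is convenient: on a grid, the integral in eq. (5) reduces to a sum, here an average of predicted win probabilities over all opponents, and the reported point is simply the grid location that maximizes that score.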