Human-AI Collaborative Bayesian Optimisation

Authors: Arun Kumar A V, Santu Rana, Alistair Shilton, Svetha Venkatesh

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental
We evaluate the optimisation performance of our proposed methods on various synthetic functions and several real-world datasets. First, we validate our methods by demonstrating their sample efficiency in the global optimisation of synthetic benchmark functions. Next, we tune the hyperparameters of the Support Vector Machine (SVM) classifier operating on publicly available real-world datasets. We compare the performance against different variations of the BO algorithm. The empirical results demonstrate the superiority of our proposed framework.
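For context on what this kind of experiment optimises, below is a minimal sketch of the black-box objective in SVM hyperparameter tuning: cross-validated accuracy as a function of the SVM's hyperparameters. The dataset, search variables, and scoring here are illustrative assumptions, not settings taken from the paper.

# Illustrative black-box objective for SVM hyperparameter tuning
# (dataset and hyperparameter parameterisation are assumptions, not the paper's).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

def svm_objective(log_C, log_gamma):
    """Mean 5-fold CV accuracy: the quantity a BO method would maximise."""
    clf = SVC(C=10.0 ** log_C, gamma=10.0 ** log_gamma)
    return cross_val_score(clf, X, y, cv=5).mean()

print(svm_objective(0.0, -3.0))  # accuracy at C = 1, gamma = 1e-3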
Researcher Affiliation | Academia
Arun Kumar A V, Santu Rana, Alistair Shilton, Svetha Venkatesh. Applied Artificial Intelligence Institute (A2I2), Deakin University, Waurn Ponds, Geelong, Australia.
Pseudocode | Yes
Algorithm 1: HAT Rectifying Recommendations (HAT-RR). Input: initial observations D_t = {(x_{1:t}, y_{1:t})}, sampling budget T.
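Only the input line of Algorithm 1 is excerpted above; its body is not reproduced here. As a heavily hedged sketch of the general shape such a loop could take, assuming the human may rectify, i.e. override, the acquisition maximiser's recommendation (this is an illustration, not the authors' algorithm):

# Hedged sketch of a BO loop with human-rectified recommendations.
# The rectification rule (a human may override the acquisition maximiser)
# is an assumption for illustration; it is not the paper's Algorithm 1.
def hat_rr_sketch(f, D, T, fit_gp, maximise_acq, ask_human):
    """D: list of (x, y) observations; T: sampling budget."""
    for _ in range(T):
        gp = fit_gp(D)                 # fit the surrogate to all observations
        x_bo = maximise_acq(gp)        # BO's suggested next point
        x_next = ask_human(x_bo, gp)   # human accepts or rectifies x_bo
        D.append((x_next, f(x_next)))  # evaluate and augment the data
    return max(D, key=lambda pair: pair[1])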
Open Source Code | Yes
The code base used for the experiments mentioned above is available at https://github.com/mailtoarunkumarav/HumanAITeaming.
Open Datasets | Yes
Next, we tune the hyperparameters of the Support Vector Machine (SVM) classifier operating on publicly available multi-dimensional real-world datasets. In our experiments, we consider multi-dimensional real-world datasets publicly available in the UCI repository [Dua and Graff, 2017].
Dataset Splits | Yes
The real-world datasets used are randomly split into a training set containing 80% of the total instances and a test set consisting of the remaining 20%.
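A minimal sketch of the stated 80/20 random split; the use of scikit-learn and the placeholder dataset are assumptions, as the paper specifies only the proportions.

# 80/20 random split as described; scikit-learn is an assumed implementation.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)  # placeholder for a UCI dataset
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, shuffle=True, random_state=0)  # 80% train / 20% test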
Hardware Specification | No
The paper does not provide specific hardware details (exact GPU/CPU models, processor speeds, or memory amounts) used to run its experiments.
Software Dependencies | No
The paper does not name ancillary software dependencies, such as libraries or solvers with version numbers, needed to replicate the experiments.
Experiment Setup | Yes
In the aforesaid methods, we have used the SE kernel (k_SE) for fitting GP surrogate models. The lengthscale parameter of k_SE is tuned in the interval [0.1, 1]. We standardise the function output, and thus use unit signal variance for k_SE. We follow the guidelines mentioned in the Kernel Functional Optimisation (KFO) framework [Venkatesh et al., 2021] for the hyperparameter selection of both the SE kernel (k_SE) and the hyperkernel (κ) used in the KFO framework. The hyperparameters of k_SE, i.e., the lengthscale (l) and the signal variance (σ_f^2), are tuned in the interval [0.1, 1], as the bounds are always normalised. The hyperparameters λ_h and l of the hyperkernel κ used in the KFO framework are both tuned in the interval (0, 1]. We use t = d + 2 initial observations for a d-dimensional problem. The threshold used in Algorithm 2 is set at 95%.
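A minimal sketch of the described surrogate configuration, assuming a scikit-learn implementation (the paper states the settings but not the library): an SE/RBF kernel with lengthscale bounded in [0.1, 1], unit signal variance on standardised outputs, and t = d + 2 initial observations.

# Hedged sketch of the stated GP surrogate settings; scikit-learn is assumed.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

d = 2                                  # problem dimensionality
rng = np.random.default_rng(0)
X_init = rng.uniform(size=(d + 2, d))  # t = d + 2 initial observations
y_init = rng.normal(size=d + 2)        # placeholder objective values

y_std = (y_init - y_init.mean()) / y_init.std()  # standardised outputs
# A bare RBF fixes the signal variance at 1, matching the "unit signal
# variance" setting; the lengthscale is tuned within [0.1, 1] during fitting.
kernel = RBF(length_scale=0.5, length_scale_bounds=(0.1, 1.0))
gp = GaussianProcessRegressor(kernel=kernel).fit(X_init, y_std)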