Multi-objective Bayesian optimisation with preferences over objectives

Authors: Majid Abdolshah, Alistair Shilton, Santu Rana, Sunil Gupta, Svetha Venkatesh

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct a series of experiments to test the empirical performance of our proposed method MOBO-PC and compare it with other strategies. These experiments include synthetic data as well as optimizing the hyper-parameters of a feed-forward neural network. For the Gaussian process, we use maximum likelihood estimation for setting hyperparameters [21].
Researcher Affiliation | Academia | Majid Abdolshah, Alistair Shilton, Santu Rana, Sunil Gupta, Svetha Venkatesh; The Applied Artificial Intelligence Institute (A2I2), Deakin University, Australia; {majid,alistair.shilton,santu.rana,sunil.gupta,svetha.venkatesh}@deakin.edu.au
Pseudocode | Yes | Algorithm 1: Test if v ∈ S_I. Algorithm 2: Preference-Order Constrained Bayesian Optimisation (MOBO-PC). Algorithm 3: Calculate Pr(x ∈ X_I | D). Algorithm 4: Calculate α_t^PEHI(x | D).
Open Source Code | No | The paper does not contain any statement about releasing source code or providing a link to a code repository.
Open Datasets | Yes | We are using the MNIST dataset, and the tuning parameters include the number of hidden layers (x1 ∈ [1, 3]), the number of hidden units per layer (x2 ∈ [50, 300]), the learning rate (x3 ∈ (0, 0.2]), the amount of dropout (x4 ∈ [0.4, 0.8]), and the level of l1 (x5 ∈ (0, 0.1]) and l2 (x6 ∈ (0, 0.1]) regularization.
Dataset Splits | No | The paper mentions using the MNIST dataset but does not explicitly state the training, validation, or test splits or percentages.
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU/CPU models, memory) used to run its experiments.
Software Dependencies | No | The paper mentions using Gaussian processes and discusses a neural network, but it does not provide specific version numbers for any software dependencies or libraries used in the experiments.
Experiment Setup | Yes | We are using the MNIST dataset, and the tuning parameters include the number of hidden layers (x1 ∈ [1, 3]), the number of hidden units per layer (x2 ∈ [50, 300]), the learning rate (x3 ∈ (0, 0.2]), the amount of dropout (x4 ∈ [0.4, 0.8]), and the level of l1 (x5 ∈ (0, 0.1]) and l2 (x6 ∈ (0, 0.1]) regularization. (A minimal sketch of this setup follows the table.)
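
The Research Type and Experiment Setup rows above describe a six-dimensional hyper-parameter space for a feed-forward network on MNIST and a Gaussian-process surrogate whose hyperparameters are set by maximum likelihood. The sketch below is not the authors' code; it only illustrates that setup under stated assumptions: the library choice (scikit-learn), the Matérn kernel, and the placeholder objective evaluate() are illustrative assumptions, and the paper itself trains a real network and optimises multiple objectives with MOBO-PC.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Search space from the experiment-setup quote: x1 hidden layers in [1, 3],
# x2 hidden units in [50, 300], x3 learning rate in (0, 0.2], x4 dropout in
# [0.4, 0.8], x5 l1 in (0, 0.1], x6 l2 in (0, 0.1].
LOWER = np.array([1.0, 50.0, 1e-6, 0.4, 1e-6, 1e-6])
UPPER = np.array([3.0, 300.0, 0.2, 0.8, 0.1, 0.1])

def sample_configs(n, rng):
    # Draw n configurations uniformly from the box-constrained space.
    return LOWER + rng.random((n, len(LOWER))) * (UPPER - LOWER)

def evaluate(config):
    # Placeholder objective (hypothetical): the paper trains a feed-forward
    # network on MNIST and measures objectives such as error and training time.
    x = (config - LOWER) / (UPPER - LOWER)
    return float(np.sin(x).sum())  # stand-in value so the sketch runs end to end

rng = np.random.default_rng(0)
X = sample_configs(20, rng)                     # initial design
y = np.array([evaluate(c) for c in X])          # a single objective, for brevity

# GP surrogate; kernel hyperparameters are chosen by maximising the log marginal
# likelihood, matching the "maximum likelihood estimation" statement quoted above.
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True,
                              n_restarts_optimizer=5)
gp.fit(X, y)
mean, std = gp.predict(sample_configs(5, rng), return_std=True)
print(mean, std)

In the paper each candidate configuration would be evaluated on several objectives, with one GP per objective; the single-objective surrogate here is only meant to show how the quoted search space and the maximum-likelihood GP fit plug together.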