Uncertainty-Aware Search Framework for Multi-Objective Bayesian Optimization
Authors: Syrine Belakaria, Aryan Deshwal, Nitthilan Kannappan Jayakodi, Janardhan Rao Doppa
AAAI 2020, pp. 10044-10052
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments on several synthetic and six diverse real-world benchmark problems show that USeMO consistently outperforms the state-of-the-art algorithms. |
| Researcher Affiliation | Academia | Syrine Belakaria, Aryan Deshwal, Nitthilan Kannappan Jayakodi, Janardhan Rao Doppa School of EECS, Washington State University {syrine.belakaria, aryan.deshwal, n.kannappanjayakodi, jana.doppa}@wsu.edu |
| Pseudocode | Yes | Algorithm 1: USeMO Framework (a hedged sketch of this loop appears after the table) |
| Open Source Code | No | The paper states that it uses 'the code for these methods from the BO library Spearmint' (https://github.com/HIPS/Spearmint/tree/PESM) for the existing baselines, but it provides no explicit statement or link for open-source code of the proposed USeMO framework itself. |
| Open Datasets | Yes | We optimize a dense neural network over the MNIST dataset (LeCun et al. 1998)...SW-LLVM is a dataset with 1024 compiler settings (Siegmund et al. 2012)...The dataset SNW was first introduced by (Zuluaga, Milder, and Püschel 2012)...The design space of the NoC dataset (Almer, Topham, and Franke 2011)...The materials dataset SMA consists of 77 different design configurations of shape memory alloys (Gopakumar et al. 2018)...PEM is a materials dataset consisting of 704 configurations of piezoelectric materials (Gopakumar et al. 2018). |
| Dataset Splits | Yes | [For the MNIST benchmark:] We employ 10K instances for validation and 50K instances for training. |
| Hardware Specification | Yes | we run all experiments on a machine with the following configuration: Intel i7-7700K CPU @ 4.20GHz with 8 cores and 32 GB memory. |
| Software Dependencies | No | The paper mentions using 'GP based statistical model', 'NSGA-II algorithm', and the 'BO library Spearmint', but it does not provide specific version numbers for any of these software components. |
| Experiment Setup | Yes | The hyper-parameters are estimated after every 10 function evaluations...We initialize the GP models for all functions by sampling initial points at random from a Sobol grid using the in-built procedure in the Spearmint library...For NSGA-II, the most important parameter is the number of function calls...Therefore, we fixed it to 1500 for all our experiments...We train the network for 100 epochs for evaluating each candidate hyper-parameter value on the validation set. (These settings are collected into a single sketch after the table.) |
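
The pseudocode row above refers to Algorithm 1 of the paper. Below is a minimal, self-contained sketch of that loop, assuming LCB/UCB acquisition functions, scikit-learn GP surrogates, and a toy two-objective problem; the paper's NSGA-II step is approximated by a random-candidate Pareto filter so the snippet needs no extra dependencies. It is an illustration of the idea, not the authors' implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

# Toy 1-D problem with two conflicting objectives (purely illustrative).
def f1(x): return np.sin(3 * x).ravel()
def f2(x): return np.cos(3 * x).ravel()
objectives = [f1, f2]
bounds = (0.0, 2.0)

def pareto_mask(Y):
    """Boolean mask marking the non-dominated rows of Y (minimization)."""
    n = len(Y)
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if i != j and np.all(Y[j] <= Y[i]) and np.any(Y[j] < Y[i]):
                mask[i] = False
                break
    return mask

# Initial design (the paper samples a Sobol grid; uniform sampling here).
X = rng.uniform(*bounds, size=(5, 1))
Y = np.column_stack([f(X) for f in objectives])
beta = 2.0  # LCB/UCB exploration weight (assumed value)

for t in range(20):
    # Step 1: fit one GP surrogate per expensive objective.
    gps = [GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
           .fit(X, Y[:, k]) for k in range(len(objectives))]

    # Step 2: solve the *cheap* multi-objective problem over the per-objective
    # acquisition functions (LCB here). The paper uses NSGA-II; we filter
    # random candidates for non-domination instead, to stay dependency-free.
    cand = rng.uniform(*bounds, size=(512, 1))
    stats = [gp.predict(cand, return_std=True) for gp in gps]
    lcb = np.column_stack([mu - beta * sd for mu, sd in stats])
    pareto = cand[pareto_mask(lcb)]

    # Step 3: from the Pareto set, pick the input whose uncertainty
    # hyper-rectangle (UCB minus LCB per objective) has maximum volume.
    sds = [gp.predict(pareto, return_std=True)[1] for gp in gps]
    volume = np.prod([2 * beta * sd for sd in sds], axis=0)
    x_next = pareto[np.argmax(volume)].reshape(1, -1)

    # Step 4: evaluate the true expensive objectives and augment the data.
    X = np.vstack([X, x_next])
    Y = np.vstack([Y, [f(x_next)[0] for f in objectives]])
```

The `volume` computation mirrors the paper's uncertainty measure, the volume of the uncertainty hyper-rectangle spanned by each GP's confidence interval, using the predictive standard deviations of the surrogates.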
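
For convenience, the setup values quoted in the Dataset Splits and Experiment Setup rows can be gathered in one place. The field names below are our own shorthand, not the paper's or Spearmint's configuration keys.

```python
# Experiment settings as reported in the paper, collected into one sketch.
# Field names are illustrative assumptions, not Spearmint's config format.
USEMO_SETUP = {
    "gp_hyperparam_refit_every": 10,  # GP hyper-parameters re-estimated every 10 evaluations
    "init_design": "sobol",           # initial points from a Sobol grid (Spearmint built-in)
    "nsga2_function_calls": 1500,     # fixed NSGA-II budget for all experiments
    "mnist_train_instances": 50_000,  # MNIST split for the NN tuning benchmark
    "mnist_valid_instances": 10_000,
    "nn_train_epochs": 100,           # epochs per candidate hyper-parameter evaluation
}
```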