Max-value Entropy Search for Multi-Objective Bayesian Optimization
Authors: Syrine Belakaria, Aryan Deshwal, Janardhan Rao Doppa
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments on several synthetic and real-world benchmark problems show that MESMO consistently outperforms the state-of-the-art algorithms. |
| Researcher Affiliation | Academia | Syrine Belakaria, Aryan Deshwal, Janardhan Rao Doppa School of EECS, Washington State University {syrine.belakaria, aryan.deshwal, jana.doppa}@wsu.edu |
| Pseudocode | Yes | Algorithm 1: MESMO Algorithm (a hedged sketch of the acquisition function it maximizes appears after this table) |
| Open Source Code | No | The paper mentions and links code for baselines ('Spearmint', the 'PyGMO' library) but provides no explicit statement or link for an open-source release of the authors' own MESMO implementation. |
| Open Datasets | Yes | We optimize a dense neural network over the MNIST dataset [13]. (...) We employed four real-world benchmarks with data available at [31, 21]. (...) We also employ two benchmarks from the general multi-objective optimization literature [16, 4]. |
| Dataset Splits | Yes | We employ 10K instances for validation and 50K instances for training. (A split sketch appears after this table.) |
| Hardware Specification | Yes | We run all experiments on a machine with the following configuration: Intel i7-7700K CPU @ 4.20GHz with 8 cores and 32 GB memory. |
| Software Dependencies | No | The paper names software ('Spearmint', the 'PyGMO' library) and provides links, but does not specify version numbers for these or for other components (e.g., Python or the deep learning framework). |
| Experiment Setup | Yes | The hyper-parameters are estimated after every 5 function evaluations. We initialize the GP models for all functions by sampling initial points at random from a Sobol grid. (...) We train the network for 100 epochs to evaluate each candidate hyper-parameter configuration on the validation set. (An initialization sketch follows the table.) |
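For reference on the Pseudocode row: MESMO scores a candidate input by the expected reduction in entropy of the Pareto-front max-values, averaged over S Pareto fronts sampled from the GP posteriors. The sketch below is a minimal Python rendering of that closed-form score; the function name, array shapes, and the use of NumPy/SciPy are our assumptions, and the Pareto-front sampling itself (e.g., drawing GP posterior samples and solving the cheap surrogate problem with a multi-objective solver such as NSGA-II) is left out.

```python
import numpy as np
from scipy.stats import norm

def mesmo_acquisition(mu, sigma, pareto_maxima):
    """Monte-Carlo estimate of the MESMO score at one candidate input.

    mu, sigma     : shape (K,) GP posterior mean / std for the K objectives.
    pareto_maxima : shape (S, K) per-objective maxima of S sampled Pareto
                    fronts, obtained elsewhere.
    """
    gamma = (pareto_maxima - mu) / sigma           # standardized gaps, shape (S, K)
    cdf = np.clip(norm.cdf(gamma), 1e-12, 1.0)     # guard against log(0)
    pdf = norm.pdf(gamma)
    # Truncated-Gaussian entropy-reduction term per objective,
    # summed over objectives and averaged over Pareto-front samples.
    terms = gamma * pdf / (2.0 * cdf) - np.log(cdf)
    return terms.sum(axis=1).mean()
```

The next evaluation point would then be chosen by maximizing this score over the input space.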
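The Dataset Splits row (50K training / 10K validation on MNIST) can be reproduced with a simple slice of the standard 60K-image training set. The Keras loader below is an assumption, since the paper does not state which framework was used:

```python
from tensorflow.keras.datasets import mnist

(x, y), _ = mnist.load_data()             # standard 60K-image training set
x_train, y_train = x[:50000], y[:50000]   # 50K instances for training
x_val, y_val = x[50000:], y[50000:]       # 10K instances for validation
```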
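Finally, for the Experiment Setup row: a minimal sketch of the described Sobol-grid initialization, assuming SciPy's quasi-Monte Carlo module. The paper does not name the library it used, and the dimensions, bounds, and initial-point count below are illustrative, not taken from the paper:

```python
import numpy as np
from scipy.stats import qmc

# Illustrative values; the paper does not report these per benchmark.
dim, n_init = 6, 8
lower, upper = np.zeros(dim), np.ones(dim)

sobol = qmc.Sobol(d=dim, scramble=True)
X_init = qmc.scale(sobol.random(n_init), lower, upper)  # initial design in the input box
# Per the paper, GP hyper-parameters are then re-estimated after every 5 function evaluations.
```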