Mercer Features for Efficient Combinatorial Bayesian Optimization
Authors: Aryan Deshwal, Syrine Belakaria, Janardhan Rao Doppa
AAAI 2021, pages 7210-7218
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on diverse real-world benchmarks demonstrate that MerCBO performs similarly or better than prior methods. |
| Researcher Affiliation | Academia | Aryan Deshwal, Syrine Belakaria, Janardhan Rao Doppa, School of EECS, Washington State University {aryan.deshwal, syrine.belakaria, jana.doppa}@wsu.edu |
| Pseudocode | Yes | Algorithm 1: MerCBO Algorithm (for context, an illustrative sketch of a generic combinatorial BO loop appears after this table) |
| Open Source Code | No | The paper only provides GitHub links for the baseline methods (COMBO, BOCS, SMAC), but does not state that the code for MerCBO, the method described in this paper, is available or provide a link to it. |
| Open Datasets | Yes | Ising sparsification. The probability distribution p(z) (Baptista and Poloczek 2018; Oh et al. 2019) defined by a zero-field Ising model I_p is parametrized by a symmetric interaction matrix J_p whose support is represented as a graph G_p; and Low auto-correlation binary sequences (LABS). This problem has diverse applications in multiple fields (Bernasconi 1987; Packebusch and Mertens 2015)... and The goal is to find sequences that maximize the binding activity between a variety of human transcription factors and every possible length-8 DNA sequence (Barrera et al. 2016; Angermueller et al. 2020). |
| Dataset Splits | No | The paper mentions initializing a 'small-sized training set TRAIN' and discusses how the surrogate models were initialized for the UAV design task ('randomly selecting from worst (in terms of objective) 10% structures'), but it does not provide specific percentages or counts for training/validation/test splits across all experiments, nor does it refer to standard predefined splits. |
| Hardware Specification | Yes | We run both TS and EI experiments on a 32 core Intel(R) Core(TM) i9-7960X CPU @ 2.80GHz machine. |
| Software Dependencies | No | The paper describes the methods and baselines used but does not provide specific software dependency details like library names with version numbers (e.g., Python 3.x, PyTorch 1.x, scikit-learn 0.x). |
| Experiment Setup | No | The paper mentions that 'the priors for all GP hyper-parameters and their posterior computation were kept the same' and 'We ran five iterations of submodular relaxation approach for solving AFO problems', but it does not specify concrete hyperparameter values (e.g., specific learning rates, batch sizes, or number of epochs). |
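
The Pseudocode row above refers to the paper's Algorithm 1 (MerCBO). For readers unfamiliar with the setting, the following is a minimal, illustrative sketch of a generic Bayesian optimization loop over binary vectors. It is not the authors' method: it substitutes a simple Hamming-distance kernel for MerCBO's Mercer features of the discrete diffusion kernel, and brute-force candidate enumeration for the paper's submodular-relaxation acquisition optimizer. All names here (`hamming_kernel`, `gp_posterior`, `bayes_opt_binary`, the toy objective) are hypothetical.

```python
# A minimal sketch of combinatorial Bayesian optimization over binary
# vectors. Assumptions: a Hamming-distance kernel stands in for the
# paper's Mercer-feature construction, and the acquisition function is
# optimized by exhaustive enumeration over a small search space.
import itertools
import numpy as np

def hamming_kernel(X1, X2, beta=0.5):
    """Exponential kernel on the Hamming distance between binary vectors."""
    d = (X1[:, None, :] != X2[None, :, :]).sum(axis=2)
    return np.exp(-beta * d)

def gp_posterior(X_train, y_train, X_query, noise=1e-6):
    """Standard GP posterior mean and variance at the query points."""
    K = hamming_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = hamming_kernel(X_query, X_train)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = 1.0 - np.sum(v**2, axis=0)  # prior variance k(x, x) = 1
    return mean, np.maximum(var, 1e-12)

def bayes_opt_binary(objective, n_dims=8, n_init=5, n_iters=20, seed=0):
    rng = np.random.default_rng(seed)
    # Small search space, so we can enumerate all 2^n binary candidates.
    candidates = np.array(list(itertools.product([0, 1], repeat=n_dims)))
    X = rng.integers(0, 2, size=(n_init, n_dims))
    y = np.array([objective(x) for x in X])
    for _ in range(n_iters):
        mean, var = gp_posterior(X, y, candidates)
        # Thompson-sampling-style acquisition: draw one value per candidate
        # (independent draws, a crude approximation of a joint function
        # sample) and query the minimizer.
        sample = mean + np.sqrt(var) * rng.standard_normal(len(candidates))
        x_next = candidates[np.argmin(sample)]
        X = np.vstack([X, x_next])
        y = np.append(y, objective(x_next))
    best = np.argmin(y)
    return X[best], y[best]

if __name__ == "__main__":
    # Toy objective: Hamming distance to a hidden binary target.
    target = np.array([1, 0, 1, 1, 0, 0, 1, 0])
    f = lambda x: float(np.sum(x != target))
    x_best, y_best = bayes_opt_binary(f)
    print(x_best, y_best)
```

A faithful MerCBO implementation would instead build the surrogate from Mercer features of the diffusion kernel over the discrete space and solve the acquisition function optimization problem via the submodular relaxation the paper describes (run for five iterations, per the Experiment Setup row above).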