Bayesian Optimization of Combinatorial Structures
Authors: Ricardo Baptista, Matthias Poloczek
ICML 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental evaluations demonstrate that this algorithm consistently outperforms other methods from combinatorial and Bayesian optimization. |
| Researcher Affiliation | Academia | (1) Center for Computational Engineering, Massachusetts Institute of Technology, Cambridge, MA; (2) Dept. of Systems and Industrial Engineering, The University of Arizona, Tucson, AZ. |
| Pseudocode | Yes | BOCS is summarized as Algorithm 1 (Bayesian Optimization of Combinatorial Structures). A simplified sketch of the loop follows the table. |
| Open Source Code | Yes | The code for this paper is available at https://github.com/baptistar/BOCS. |
| Open Datasets | Yes | We conduct experiments on the following benchmarks: (1) binary quadratic programming with d = 10 variables (Sect. 4.1), (2) sparsification of Ising models with d = 24 edges (Sect. 4.2), (3) contamination control of a food supply chain with d = 25 stages (Sect. 4.3), (4) complexity reduction of an aero-structural multi-component system with d = 21 coupling variables (Sect. 4.4). We consider zero-field Ising models that admit a distribution p(z) = (1/Z_p) exp(z^T J_p z) for z ∈ {-1, 1}^n, where J_p ∈ R^(n×n) is a symmetric interaction matrix and Z_p is the partition function. The contamination control problem (Hu et al., 2010) considers a food supply with d stages that may be contaminated with pathogenic microorganisms. We study the aero-structural problem of Jasa et al. (2018). (A sketch of the Ising model distribution follows the table.) |
| Dataset Splits | No | The paper does not provide specific training, validation, and test dataset splits. It describes initial sample sizes for its optimization problem. |
| Hardware Specification | No | The paper does not specify the hardware (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions that code is available and refers to the use of an SDP solver, but it does not list specific software dependencies with version numbers. |
| Experiment Setup | Yes | We set d = 10, sampled 50 independent realizations for Q, and ran every algorithm 10 times on each instance with different initial datasets. Bayesian optimization algorithms received identical initial datasets of size N0 = 20. The experimental setup consists of 4 × 4 zero-field Ising models with grid graphs, i.e., n = 16 nodes and d = 24 edges. The initial dataset contains N0 = 20 points. We have d = 25 stages. We set T = 10^2, hence the objective requires T simulations of the random process and is expensive to evaluate. The ℓ1 regularization term encourages the prevention efforts to occur at a small number of stages. Fig. 6 shows the average performances for λ = 10^-2. (A sketch of the regularized objective evaluation follows the table.) |
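
Algorithm 1 (BOCS) maintains a sparse Bayesian linear model over second-order monomials of the binary variables and, at each iteration, minimizes a posterior sample of that model plus the regularizer to pick the next evaluation. The sketch below is only an illustration of that loop, not the authors' implementation: it replaces the paper's sparse (horseshoe-prior) regression with a plain Gaussian-prior Bayesian linear model and replaces the SDP relaxation / simulated annealing acquisition solvers with exhaustive enumeration, so it only runs for small d. Function names and parameter values here are hypothetical.

```python
# Illustrative sketch of the BOCS loop (not the authors' code). Simplifications:
# a Gaussian-prior Bayesian linear model stands in for the paper's sparse
# (horseshoe) regression, and exhaustive enumeration of {0,1}^d stands in for
# the SDP / simulated annealing acquisition solvers, so keep d small.
import itertools
import numpy as np

def features(x):
    """Second-order monomial features: 1, x_i, and x_i * x_j for i < j."""
    d = len(x)
    pairs = [x[i] * x[j] for i in range(d) for j in range(i + 1, d)]
    return np.concatenate(([1.0], x, pairs))

def bocs_sketch(f, d, lam=0.0, n_init=20, n_iter=30, noise=1e-2, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.integers(0, 2, size=(n_init, d)).astype(float)   # initial design
    y = np.array([f(x) for x in X])
    candidates = np.array(list(itertools.product([0.0, 1.0], repeat=d)))
    for _ in range(n_iter):
        Phi = np.array([features(x) for x in X])
        # Posterior of a Bayesian linear model with a unit Gaussian prior on
        # the coefficients and observation noise variance `noise`.
        A = Phi.T @ Phi / noise + np.eye(Phi.shape[1])
        cov = np.linalg.inv(A)
        cov = (cov + cov.T) / 2.0                             # symmetrize for sampling
        mean = cov @ Phi.T @ y / noise
        alpha = rng.multivariate_normal(mean, cov)            # Thompson sample
        # Acquisition: minimize the sampled surrogate plus the l1 regularizer.
        acq = np.array([feat @ alpha for feat in map(features, candidates)])
        acq += lam * candidates.sum(axis=1)
        x_next = candidates[np.argmin(acq)]
        X = np.vstack([X, x_next])
        y = np.append(y, f(x_next))
    best = int(np.argmin(y))
    return X[best], y[best]

# Toy usage: minimize a random quadratic objective over {0,1}^6.
rng = np.random.default_rng(1)
Q = rng.normal(size=(6, 6))
x_best, y_best = bocs_sketch(lambda x: x @ Q @ x, d=6, n_iter=10)
print(x_best, y_best)
```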
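
The Ising sparsification benchmark is defined through the zero-field distribution p(z) = (1/Z_p) exp(z^T J_p z). Below is a minimal sketch of how this distribution can be evaluated, assuming brute-force enumeration for the partition function Z_p (feasible only for small n, such as the 4 × 4 grids above); the random symmetric interaction matrix in the example is a placeholder, not the paper's grid-graph instance.

```python
# Minimal sketch (not the paper's code) of the zero-field Ising distribution
# p(z) = exp(z^T J_p z) / Z_p over z in {-1, 1}^n, with the partition function
# Z_p computed by brute-force enumeration (feasible only for small n).
import itertools
import numpy as np

def log_unnormalized(z, J):
    """Log of the unnormalized probability: z^T J z."""
    return z @ J @ z

def partition_function(J):
    """Z_p = sum over all z in {-1, 1}^n of exp(z^T J z)."""
    n = J.shape[0]
    return sum(np.exp(log_unnormalized(np.array(bits), J))
               for bits in itertools.product([-1.0, 1.0], repeat=n))

# Placeholder instance: a random symmetric interaction matrix on n = 4 nodes.
rng = np.random.default_rng(0)
n = 4
A = rng.normal(scale=0.5, size=(n, n))
J_p = (A + A.T) / 2.0
Z_p = partition_function(J_p)
z = np.array([1.0, -1.0, 1.0, 1.0])
print("p(z) =", np.exp(log_unnormalized(z, J_p)) / Z_p)
```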
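
For the binary quadratic programming benchmark, the setup above describes 50 random realizations of Q, an ℓ1 penalty, and identical initial datasets of size N0 = 20. The sketch below shows one plausible way to evaluate such a regularized objective on a random initial design; the sign convention, the distribution used for Q, and the penalty weight are assumptions rather than the paper's exact specification.

```python
# Sketch (assumptions, not the paper's exact protocol): a binary quadratic
# objective with an l1 penalty, x^T Q x - lambda * ||x||_1, evaluated on a
# random initial design of N0 = 20 points with d = 10 variables.
import numpy as np

def bqp_objective(x, Q, lam):
    """Quadratic term plus l1 regularization; the sign convention is assumed."""
    return x @ Q @ x - lam * np.sum(np.abs(x))

rng = np.random.default_rng(0)
d, N0, lam = 10, 20, 1e-2                # dimensions and a hypothetical penalty weight
A = rng.normal(size=(d, d))
Q = (A + A.T) / 2.0                       # random symmetric stand-in for the paper's Q
X0 = rng.integers(0, 2, size=(N0, d)).astype(float)     # initial binary designs
y0 = np.array([bqp_objective(x, Q, lam) for x in X0])    # initial objective values
print(np.round(y0, 3))
```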