Constrained Discrete Black-Box Optimization using Mixed-Integer Programming
Authors: Theodore P. Papalexopoulos, Christian Tjandraatmadja, Ross Anderson, Juan Pablo Vielma, David Belanger
ICML 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We test our approach on a range of unconstrained and constrained problems, including DNA binding, constrained binary quadratic problems from the MINLPLib benchmark, and the NAS-Bench-101 neural architecture search benchmark. NN+MILP surpasses or matches the performance of black-box algorithms tailored to the constraints at hand. |
| Researcher Affiliation | Collaboration | 1Operations Research Center, Massachusetts Institute of Technology, Cambridge MA, USA 2Google Research, Cambridge MA, USA. |
| Pseudocode | Yes | Algorithm 1 (MBO): Input: hypothesis class F, budget N, initial dataset D_n = {x_i, f(x_i)}_{i=1}^n, optimization domain Ω. For t = n+1 to N: P(f̂_t) ← fit(F, D_{t-1}); a(x) ← get_acquisition_function(P(f̂_t)); x_t ← inner_loop_solver(a(x), Ω); D_t ← D_{t-1} ∪ {(x_t, f(x_t))}. Return argmax_{(x_t, y_t) ∈ D_N} y_t. Algorithm 2 (NN+MILP): Input: hypothesis class F, budget N, initial dataset D_n = {x_i, f(x_i)}_{i=1}^n, MILP domain formulation M_Ω. For t = n+1 to N: f̂_t ← fit(F, D_{t-1}) (§3.2); M_t ← build_milp(f̂_t, M_Ω, D_{t-1}) (§3.3); x_t ← optimize(M_t) (generic MILP solver); D_t ← D_{t-1} ∪ {(x_t, f(x_t))}. Return argmax_{(x_t, y_t) ∈ D_N} y_t. A hedged Python sketch of the NN+MILP loop follows the table. |
| Open Source Code | No | The paper does not state that the code for their proposed method (NN+MILP) is open source or provide a link to it. It only references an open-source library (RBFOpt) used as a baseline. |
| Open Datasets | Yes | "We test our approach on a range of unconstrained and constrained problems, including DNA binding, constrained binary quadratic problems from the MINLPLib benchmark, and the NAS-Bench-101 neural architecture search benchmark."; "Tf Bind: Binding strength of a length-8 DNA sequence to a given transcription factor (Barrera et al., 2016)"; "BBOB: Non-linear function from the continuous Black-Box Optimization Benchmarking library (Hansen et al., 2009)"; "MINLPLib (Vigerske, 2021)"; "NAS-Bench-101 (Ying et al., 2019)" |
| Dataset Splits | Yes | "Algorithms are evaluated in terms of the best reward observed after 1000 queries, averaged over 20 trials per problem. Therefore, to reduce variance when comparing algorithms, we initialize each of the 20 trials with a different fixed dataset of 50 random points."; "Each model is evaluated by an explained variance score using five-fold cross-validation on the training set."; "The NAS-Bench-101 dataset contains pre-computed validation and test accuracies for three independently trained replications of each architecture, as well as the training time of each." |
| Hardware Specification | Yes | We use standard CPU machines with 1G RAM and 10 cores. |
| Software Dependencies | Yes | "Models are trained with TensorFlow (Abadi et al., 2016), using the ADAM optimizer."; "The MILP acquisition problem is solved with the Mixed-Integer Programming solver SCIP 7.0.1 (Gamrath et al., 2020) using default settings." |
| Experiment Setup | Yes | In all experiments, we fix the surrogate model hypothesis class F to networks with a single, fully-connected hidden layer of 16 neurons. Models are trained with TensorFlow (Abadi et al., 2016), using the ADAM optimizer. No hyper-parameter tuning is performed across problems. The MILP acquisition problem is solved with the Mixed-Integer Programming solver SCIP 7.0.1 (Gamrath et al., 2020) using default settings. While the acquisition problem is typically solved to optimality in seconds (Section 4.4), we set a time limit of 500s as a safeguard. A hedged sketch of this surrogate and solver setup follows the table. |
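
For concreteness, the NN+MILP outer loop (Algorithm 2) can be sketched in Python as below. The loop structure mirrors the paper's pseudocode; since no official code is released, the helpers `fit`, `build_milp`, and `optimize_milp` are passed in as stand-in callables, with candidate implementations for the first two sketched afterwards.

```python
# Hedged sketch of the NN+MILP outer loop (Algorithm 2 in the paper).
# The helper callables are stand-ins; their bodies are assumptions,
# not the authors' released code.

def nn_milp_loop(f, initial_dataset, milp_domain, budget,
                 fit, build_milp, optimize_milp):
    """Model-based optimization: fit an NN surrogate, solve a MILP to acquire.

    f               -- black-box objective, x -> float
    initial_dataset -- list of (x, f(x)) pairs (paper: 50 random points)
    milp_domain     -- MILP formulation of the feasible domain Omega
    budget          -- total query budget N
    """
    dataset = list(initial_dataset)
    while len(dataset) < budget:
        surrogate = fit(dataset)                            # Section 3.2
        milp = build_milp(surrogate, milp_domain, dataset)  # Section 3.3
        x_next = optimize_milp(milp)                        # generic MILP solver
        dataset.append((x_next, f(x_next)))                 # query the black box
    return max(dataset, key=lambda pair: pair[1])           # best observed point
```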
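The surrogate described under Experiment Setup (a single fully-connected hidden layer of 16 neurons, trained in TensorFlow with the ADAM optimizer and no per-problem tuning) might look like the following Keras sketch. The layer width and optimizer match the paper; the ReLU activation, epoch count, and flat 0/1 input encoding are assumptions.

```python
# Hedged sketch of the paper's surrogate class: one fully-connected hidden
# layer of 16 neurons, trained with ADAM in TensorFlow. The epoch count and
# input encoding are assumptions, not stated hyper-parameters.
import numpy as np
import tensorflow as tf

def fit(dataset):
    """Train a small regression network on (x, y) pairs."""
    xs = np.array([x for x, _ in dataset], dtype=np.float32)
    ys = np.array([y for _, y in dataset], dtype=np.float32)
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation="relu",
                              input_shape=(xs.shape[1],)),  # hidden layer, 16 units
        tf.keras.layers.Dense(1),                           # scalar reward output
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(), loss="mse")
    model.fit(xs, ys, epochs=100, verbose=0)                # epoch count assumed
    return model
```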
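The `build_milp` step (§3.3) encodes maximization of the trained network over the domain as a MILP. Below is a textbook big-M encoding of a single ReLU unit using PySCIPOpt, the Python interface to SCIP (the solver and 500s time limit the paper reports). The bounds `lb`/`ub`, variable names, and toy weights are illustrative assumptions, and the paper may use tighter formulations than plain big-M.

```python
# Hedged sketch of one piece of build_milp: a big-M MILP encoding of a ReLU
# unit y = max(0, w.x + b). Bounds, names, and toy data are assumptions;
# only the solver (SCIP) and the 500s safeguard come from the paper.
from pyscipopt import Model, quicksum

def add_relu(model, x_vars, w, b, lb, ub):
    """Encode y = max(0, w.x + b), given valid bounds lb <= w.x + b <= ub."""
    y = model.addVar(lb=0.0, name="relu_out")      # post-activation value
    z = model.addVar(vtype="B", name="relu_on")    # 1 iff the unit is active
    pre = quicksum(w[i] * x_vars[i] for i in range(len(x_vars))) + b
    model.addCons(y >= pre)                        # y at least the pre-activation
    model.addCons(y <= pre - lb * (1 - z))         # tight when z = 1 (active)
    model.addCons(y <= ub * z)                     # forces y = 0 when z = 0
    return y

# Toy acquisition problem over three binary inputs.
model = Model("acquisition")
model.setParam("limits/time", 500)                 # paper's 500s safeguard
x = [model.addVar(vtype="B", name=f"x{i}") for i in range(3)]
y = add_relu(model, x, w=[1.0, -2.0, 0.5], b=0.3, lb=-2.0, ub=2.0)
model.setObjective(y, sense="maximize")            # maximize the surrogate output
model.optimize()
```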