Model-Based Genetic Algorithms for Algorithm Configuration
Authors: Carlos Ansótegui, Yuri Malitsky, Horst Samulowitz, Meinolf Sellmann, Kevin Tierney
IJCAI 2015 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Numerical results show that model-based genetic algorithms significantly improve our ability to effectively configure algorithms automatically. In this section we first conduct a series of experiments on small black box optimization problems to gain an initial understanding of whether genetic engineering is at all helpful within the context of genetic algorithms, and what percentage of offspring ought to be engineered to achieve the best overall results. |
| Researcher Affiliation | Collaboration | Carlos Ansótegui, DIEI, Univ. de Lleida, Spain, carlos@diei.udl.es; Yuri Malitsky, Horst Samulowitz, Meinolf Sellmann, IBM Research, USA, {ymalits,samulowitz,meinolf}@us.ibm.com; Kevin Tierney, DS&OR Lab, University of Paderborn, Germany, tierney@dsor.de |
| Pseudocode | No | The core idea of genetically engineering offspring is very simple. Once two parents have been chosen to produce offspring together, we aim at finding the combination of the parents' genes that results in the fittest offspring as predicted by our surrogate model. (A minimal illustrative sketch appears after the table.) |
| Open Source Code | No | The paper does not provide any explicit statement or link for the source code. |
| Open Datasets | Yes | We assess the impact of genetic engineering and increased diversification on the performance of GGA by using it for globally optimizing ten 4-dimensional and ten 8-dimensional black box optimization (BBO) functions as described in [Hansen et al., 2009] from the COmparing Continuous Optimizers (COCO) software [Hansen et al., 2011]. We evaluate the solvers on a collection of industrial instances spanning the SAT Competitions from 2009 to 2013. There are 727 instances in total. For training, we evaluated each solver's default configuration on all instances to isolate the ones where the solver needed at least 30 seconds to solve an instance, but could also finish it in under 300 seconds. |
| Dataset Splits | No | For training, we evaluated each solver's default configuration on all instances to isolate the ones where the solver needed at least 30 seconds to solve an instance, but could also finish it in under 300 seconds. We ignore very easy instances as they can be handled by a pre-solver. We also ignore very hard instances because we want GGA to have a chance of solving an instance without wasting time on something it will never solve. From these subsets of instances we used 115 for training glucose and 50 for training lingeling. Subsequently, all instances not used for training that could be solved by the default or any parametrization found by the various configurators, including both easy ones and those where the default parameters time out, were used for testing. (A sketch of this filtering step appears after the table.) |
| Hardware Specification | Yes | Both tuning and evaluation are performed with a timeout of 300 seconds on AMD Opteron 6134 processors with 64GB of RAM. |
| Software Dependencies | Yes | Tuning of two highly sophisticated SAT solvers: glucose 4.0 [Audemard and Simon, 2009] and lingeling (version ayv-86bf266) [Biere, 2014]. |
| Experiment Setup | Yes | Using GGA, the allotted training time was set to 48 hours, with the population size set to 100, the maximum number of generations to 50, and the tournament size to 8. |
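
To make the quoted description under "Pseudocode" concrete, here is a minimal sketch of the genetic-engineering step: enumerate or sample crossover masks over two parents and keep the gene combination that a surrogate model predicts to be fittest. The function names, the `surrogate_predict` callable, and the mask-sampling scheme are illustrative assumptions, not the authors' implementation or the paper's actual surrogate model.

```python
import itertools
import random

def engineer_offspring(parent_a, parent_b, surrogate_predict,
                       n_samples=256, rng=None):
    """Pick the combination of the two parents' genes that the surrogate
    model predicts to be fittest (lower predicted cost is better).

    Enumerates all crossover masks when the gene count is small; otherwise
    falls back to randomly sampling masks. All names here are hypothetical.
    """
    n_genes = len(parent_a)
    if 2 ** n_genes <= n_samples:
        masks = itertools.product((0, 1), repeat=n_genes)    # exhaustive
    else:
        rng = rng or random.Random(0)
        masks = (tuple(rng.randint(0, 1) for _ in range(n_genes))
                 for _ in range(n_samples))                   # sampled
    best_child, best_score = None, float("inf")
    for mask in masks:
        # Each gene is inherited from one of the two parents.
        child = [parent_a[i] if bit == 0 else parent_b[i]
                 for i, bit in enumerate(mask)]
        score = surrogate_predict(child)
        if score < best_score:
            best_child, best_score = child, score
    return best_child

if __name__ == "__main__":
    # Toy surrogate: predicted cost is just the sum of the gene values.
    toy_surrogate = sum
    print(engineer_offspring([1, 9, 3, 7], [8, 2, 6, 4], toy_surrogate))
    # -> [1, 2, 3, 4]: the cheapest mix of the two parents' genes.
```

In the real system, `surrogate_predict` would be a learned model trained on previously evaluated configurations; a toy cost function stands in here so the sketch runs on its own.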
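
The training-instance selection quoted under "Dataset Splits" amounts to a band-pass filter on the default configuration's runtime, keeping instances solved in 30 to 300 seconds. The sketch below assumes a simple name-to-runtime mapping; everything except the two quoted thresholds is hypothetical.

```python
def select_training_instances(default_runtimes, lower=30.0, upper=300.0):
    """Keep instances the default configuration solves in [lower, upper]
    seconds; very easy and very hard (or timed-out) instances are dropped.

    default_runtimes: dict mapping instance name -> runtime in seconds,
    with float('inf') marking a timeout.
    """
    return sorted(name for name, t in default_runtimes.items()
                  if lower <= t <= upper)

if __name__ == "__main__":
    runtimes = {"easy.cnf": 2.1,           # pre-solver territory, dropped
                "medium.cnf": 45.0,        # kept for training
                "ok.cnf": 120.5,           # kept for training
                "hard.cnf": float("inf")}  # timeout, dropped
    print(select_training_instances(runtimes))  # ['medium.cnf', 'ok.cnf']
```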