Automatic Construction of Parallel Portfolios via Explicit Instance Grouping

Authors: Shengcai Liu, Ke Tang, Xin Yao

AAAI 2019, pp. 1560-1567 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The experimental results on two widely studied problem domains, the Boolean satisfiability problem (SAT) and the traveling salesman problem (TSP), showed that the parallel portfolios constructed by the proposed method achieved consistently superior performance to the ones constructed by the state-of-the-art ACPP methods, and could even rival sophisticated hand-designed parallel solvers.
Researcher Affiliation | Academia | (1) School of Computer Science and Technology, University of Science and Technology of China, Hefei 230027, China; (2) University Key Laboratory of Evolving Intelligent Systems of Guangdong Province, Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China
Pseudocode | Yes | The pseudo-code of PCIT is given in Algorithm 1.
Open Source Code | No | The paper does not explicitly state that the source code for the proposed PCIT method is openly available, nor does it provide a link to a code repository.
Open Datasets | Yes | Specifically, we used PCIT to build parallel portfolios based on a training set, and then compared them with the ones constructed by the existing methods on an unseen test set. Instances obtained from the application track of the SAT 12 Challenge were randomly and evenly split into a training set and a test set; to ensure the computational costs of portfolio construction would not be prohibitively large, the cutoff time used in training (180 s) was smaller than the one used in testing (900 s, the same as the SAT 12 Challenge). In SAT-Single, we chose instances from the benchmark used in the agile track of the SAT 16 Competition for its moderate cutoff time (60 s); specifically, we randomly selected 2000 instances from the original benchmark (containing 5000 instances) and divided them evenly for training and testing. For TSP-Single and TSP-Multi we used the same instance sets: we used the portgen and portcgen generators from the 8th DIMACS Implementation Challenge to generate 1000 uniform instances (in which the cities are randomly distributed) and 1000 clustered instances (in which the cities are distributed around different central points). The problem sizes (numbers of cities) of all generated instances are within [1500, 2500]. Once again, we divided them evenly for training and testing (a sketch of this generation and split follows the table).
Dataset Splits | No | The paper states that instance sets were "randomly and evenly split into a training set and a test set" or "divided them evenly for training and testing" across all scenarios. While validation is mentioned (e.g., "Validate each of P_1, ..., P_rpc on I"), there is no explicit mention of a separate validation dataset split with specific percentages or counts.
Hardware Specification | Yes | All the experiments were conducted on a cluster of 5 Intel Xeon machines, each with 60 GB RAM and 6 cores (2.20 GHz, 15 MB cache), running CentOS 7.5.
Software Dependencies | Yes | SMAC version 2.10.03 (Hutter, Hoos, and Leyton-Brown 2011) was used as the AC procedure. All experiments ran on CentOS 7.5.
Experiment Setup | Yes | We set the number of component solvers to 8 (the same as in (Lindauer et al. 2017)), since 8-core (and 8-thread) machines are widely available now. The optimization goal considered here is to minimize the time required by a solver to solve the problem (for SAT) or to find the optimum of the problem (for TSP). In particular, we set the performance metric to Penalized Average Runtime 10 (PAR-10) (Hutter et al. 2009), which counts each timeout as 10 times the given cutoff time; a worked PAR-10 sketch follows the table. For all ACPP methods considered here, SMAC version 2.10.03 (Hutter, Hoos, and Leyton-Brown 2011) was used as the AC procedure. Since the performance of SMAC can often be improved by using instance features, we gave SMAC access to the 126 SAT features used in (Hutter, Hoos, and Leyton-Brown 2011) and the 114 TSP features used in (Kotthoff et al. 2015). The same features were also used by PCIT (for transferring instances) and CLUSTERING (for clustering instances). To make the comparisons fair, the total CPU time consumed by each method was kept almost the same; the detailed time budget for each method is given in Table 2.
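To make the TSP instance preparation quoted in the Open Datasets row concrete, here is a minimal Python sketch that drives the DIMACS portgen/portcgen generators and performs the random even train/test split. The command-line usage (`generator <size> <seed>` writing a TSPLIB instance to stdout), the file paths, and the seeds are assumptions; only the generator names, the instance counts, and the size range [1500, 2500] come from the quoted text.

```python
import random
import subprocess

SIZE_RANGE = (1500, 2500)   # number of cities, per the quoted text
N_PER_TYPE = 1000           # 1000 uniform + 1000 clustered instances

def generate(generator, n_instances, out_prefix):
    """Invoke a DIMACS generator binary once per instance.

    Assumes the common CLI 'generator <size> <seed>' that writes a
    TSPLIB-format instance to stdout; adjust if your build differs.
    """
    paths = []
    for i in range(n_instances):
        size = random.randint(*SIZE_RANGE)
        path = f"{out_prefix}_{i}.tsp"
        with open(path, "w") as f:
            subprocess.run([generator, str(size), str(i)], stdout=f, check=True)
        paths.append(path)
    return paths

def even_split(instances, seed=0):
    """Randomly and evenly split instances into training and test halves."""
    rng = random.Random(seed)
    shuffled = list(instances)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

uniform = generate("./portgen", N_PER_TYPE, "uniform")
clustered = generate("./portcgen", N_PER_TYPE, "clustered")
train, test = even_split(uniform + clustered)
```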
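The PAR-10 metric cited in the Experiment Setup row is simple to state precisely: each timeout is counted as 10 times the cutoff, then runtimes are averaged. Below is a minimal Python sketch, assuming runs are recorded as (runtime, solved) pairs; the function name `par10` is illustrative and not from the paper, whose code is not public.

```python
def par10(runs, cutoff):
    """Penalized Average Runtime 10 (PAR-10).

    runs   -- iterable of (runtime_seconds, solved) pairs
    cutoff -- cutoff time in seconds (e.g., 900 in SAT 12 testing)

    Each unsolved run (timeout) is counted as 10 * cutoff.
    """
    penalized = [t if solved else 10 * cutoff for t, solved in runs]
    return sum(penalized) / len(penalized)

# Example: two solved runs and one timeout under a 900 s cutoff.
runs = [(12.3, True), (450.0, True), (900.0, False)]
print(par10(runs, 900))  # (12.3 + 450.0 + 9000.0) / 3 = 3154.1
```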