Learning to Break Symmetries for Efficient Optimization in Answer Set Programming

Authors: Alice Tarzariol, Martin Gebser, Konstantin Schekotihin, Mark Law

AAAI 2023

Reproducibility Variable: Research Type
Result: Experimental
LLM Response: "We prove the correctness of our method and evaluate it on real-world optimization problems from the domain of automated configuration. Our experiments show significant improvements of optimization performance due to the learned first-order constraints. ... We evaluate the devised learning framework for optimization problems on optimization versions of the combinatorial problems addressed in (Tarzariol, Gebser, and Schekotihin 2022; Tarzariol et al. 2022) as well as on the fastfood problem (Denecker et al. 2009). Our experiments were performed on an Intel i7-3930K machine under Linux (Debian GNU/Linux 11), running ILASP (v4.3.1) as ILP and CLINGO (v5.5.2) as ASP system. The benchmarks and settings for reproducing our experiments can be found in https://github.com/prosysscience/Symmetry_Breaking_with_ILP/tree/optimization. ... Table 1: Runtime results for CLINGO on: (i) baseline encodings; and (ii) baseline encodings augmented with learned constraints. #o and #s indicate the number of instances finished with a proven or unproven optimal solution, respectively. min, max, avg, and stdev provide the minimal, maximal, and average runtime in seconds along with the standard deviation over instances counted in #o (finished with a proven optimal solution)."

Reproducibility Variable: Researcher Affiliation
Result: Collaboration
LLM Response: "Alice Tarzariol (1), Martin Gebser (1, 2), Konstantin Schekotihin (1), Mark Law (3); 1: University of Klagenfurt, Klagenfurt, Austria; 2: Graz University of Technology, Graz, Austria; 3: ILASP Limited, Grantham, UK; {alice.tarzariol, martin.gebser, konstantin.schekotihin}@aau.at, mark@ilasp.com"

Reproducibility Variable: Pseudocode
Result: Yes
LLM Response: "Algorithm 1: Method full-SBCs to compute examples for an instance i_S of the optimization problem modeled by P"

Reproducibility Variable: Open Source Code
Result: Yes
LLM Response: "The benchmarks and settings for reproducing our experiments can be found in https://github.com/prosysscience/Symmetry_Breaking_with_ILP/tree/optimization."

Reproducibility Variable: Open Datasets
Result: Yes
LLM Response: "The benchmarks and settings for reproducing our experiments can be found in https://github.com/prosysscience/Symmetry_Breaking_with_ILP/tree/optimization. ... As training sets S for the double, doublev, and triple instance families of PUP, we used two small yet representative instances per family."

Reproducibility Variable: Dataset Splits
Result: No
LLM Response: The paper mentions training and testing instances but does not specify a separate validation split or explicit percentages/counts for data partitioning.

Reproducibility Variable: Hardware Specification
Result: Yes
LLM Response: "Our experiments were performed on an Intel i7-3930K machine under Linux (Debian GNU/Linux 11)"

Reproducibility Variable: Software Dependencies
Result: Yes
LLM Response: "running ILASP (v4.3.1) as ILP and CLINGO (v5.5.2) as ASP system."

Reproducibility Variable: Experiment Setup
Result: Yes
LLM Response: "a time limit of 300 seconds for each run. ... a time limit of one hour per run. ... we repeat each learning run 120 times with different seeds"
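The Table 1 caption quoted above aggregates per-instance runtimes into #o, #s, min, max, avg, and stdev, where only instances finished with a proven optimum count toward the statistics. A minimal Python sketch of that aggregation, using hypothetical runtime data (not figures from the paper) and an invented boolean flag for "optimum proven":

```python
from statistics import mean, stdev

# Hypothetical per-instance results: (runtime in seconds, optimum proven?).
# These numbers are illustrative only, not data from the paper.
results = [
    (12.4, True), (87.0, True), (3.1, True),
    (300.0, False),  # hit the time limit: optimal solution found but unproven
]

proven = [t for t, ok in results if ok]        # instances counted in #o
unproven = [t for t, ok in results if not ok]  # instances counted in #s

# min/max/avg/stdev are computed over the proven instances only,
# matching the Table 1 caption.
summary = {
    "#o": len(proven),
    "#s": len(unproven),
    "min": min(proven),
    "max": max(proven),
    "avg": round(mean(proven), 1),
    "stdev": round(stdev(proven), 1),
}
print(summary)
```

Excluding unproven instances from the runtime statistics avoids the timeout value dominating the average, which is why the caption reports #s separately.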