Efficiently Explaining CSPs with Unsatisfiable Subset Optimization

Authors: Emilio Gamba, Bart Bogaerts, Tias Guns

IJCAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We now experimentally validate the performance of the different versions of our algorithm. Our benchmarks were run on a compute cluster, where each explanation sequence generation was assigned a single core on a 10-core INTEL Xeon Gold 6148 (Skylake) processor, a time limit of 120 minutes and a memory limit of 4GB."
Researcher Affiliation | Academia | "1 Vrije Universiteit Brussel, Belgium; 2 KU Leuven, Belgium"
Pseudocode | Yes | "Algorithm 1: EXPLAIN-ONE-STEP(C, f, I, I_end)" (a hedged sketch of the one-step-explanation idea follows the table)
Open Source Code | Yes | "Everything was implemented in Python on top of PySAT and is available at https://github.com/ML-KULeuven/ocus-explain."
Open Datasets | Yes | "All of our experiments were run on a direct translation to PySAT of the 10 puzzles of Bogaerts et al. [2020]."
Dataset Splits | No | No explicit mention of specific train/validation/test dataset splits, percentages, or counts for reproduction.
Hardware Specification | Yes | "Our benchmarks were run on a compute cluster, where each explanation sequence generation was assigned a single core on a 10-core INTEL Xeon Gold 6148 (Skylake) processor, a time limit of 120 minutes and a memory limit of 4GB."
Software Dependencies | Yes | "For MIP calls, we used Gurobi 9.0, for SAT calls MiniSat 2.2 and for MaxSAT calls RC2 as bundled with PySAT (version 0.1.6.dev11)."
Experiment Setup | Yes | "We used a cost of 60 for puzzle-agnostic constraints; 100 for puzzle-specific constraints; and cost 1 for facts." (a weighted-clause sketch follows the table)
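
To give the Pseudocode row some substance, below is a minimal Python sketch in the spirit of EXPLAIN-ONE-STEP(C, f, I, I_end), built on PySAT's assumption-based unsatisfiable cores. It is not the paper's OCUS algorithm: it only illustrates the simpler idea of computing one unsatisfiable subset per candidate literal and keeping the cheapest. The names explain_one_step, clauses, derived_facts, target_literals and cost are placeholders introduced for this example.

```python
# Hedged sketch of a one-step explanation search, NOT the paper's OCUS algorithm.
# It finds, per still-to-derive literal, an unsatisfiable subset of assumptions
# and returns the cheapest one according to a user-supplied cost function.
from pysat.solvers import Solver

def explain_one_step(clauses, derived_facts, target_literals, cost):
    """Pick the cheapest (core, literal) pair that explains one new literal.

    clauses         -- hard problem constraints C (list of clauses)
    derived_facts   -- literals already explained (the interpretation I)
    target_literals -- literals still to derive (I_end minus I)
    cost            -- function mapping a list of assumption literals to a number
    """
    best = None
    with Solver(name="minisat22", bootstrap_with=clauses) as solver:
        for lit in target_literals:
            # If `lit` is a consequence of C and I, then C together with I
            # and the negation of `lit` is unsatisfiable; the returned core
            # is an unsatisfiable subset of the assumptions used.
            assumptions = list(derived_facts) + [-lit]
            if solver.solve(assumptions=assumptions):
                continue  # `lit` is not implied yet; skip it
            core = solver.get_core()
            if best is None or cost(core) < cost(best[0]):
                best = (core, lit)
    return best  # (unsatisfiable subset of assumptions, newly explained literal)
```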
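The Experiment Setup row reports costs of 60, 100 and 1, and the Software Dependencies row mentions the RC2 MaxSAT solver bundled with PySAT. The snippet below is a hedged illustration, not the paper's actual encoding, of how such weights could be attached to soft clauses in a PySAT WCNF and solved with RC2; the clauses themselves are made-up placeholders.

```python
# Hedged illustration of attaching the reported costs (60 / 100 / 1) to soft
# clauses in a PySAT WCNF and solving with the bundled RC2 MaxSAT solver.
# The clauses are invented for this example only.
from pysat.formula import WCNF
from pysat.examples.rc2 import RC2

wcnf = WCNF()
wcnf.append([1])                     # hard clause: part of the problem definition
wcnf.append([-1, 2], weight=60)      # soft: a puzzle-agnostic constraint, cost 60
wcnf.append([-2], weight=100)        # soft: a puzzle-specific constraint, cost 100
wcnf.append([2], weight=1)           # soft: a previously derived fact, cost 1

with RC2(wcnf) as rc2:
    model = rc2.compute()            # optimal model w.r.t. total violated weight
    print(model, rc2.cost)           # here the optimum falsifies weights 60 + 1 = 61
```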