Threshold Treewidth and Hypertree Width
Authors: Robert Ganian, Andre Schidler, Manuel Sorge, Stefan Szeider
IJCAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 6 Experiments In this section we present experimental results using the algorithms discussed in the previous section. We were particularly interested in the difference in loads between oblivious (Obl) and width-first load-second (W L) methods, and the trade-off between width-first (W L) and load-first (L W) methods. Setup. We ran our experiments on a cluster, where each node consists of two Xeon E5-2640 CPUs, each running 10 cores at 2.4 GHz and 160 GB memory. As solvers we used minisat 2.2.0 (http://minisat.se/) and optimathsat 1.6.2 (http://optimathsat.disi.unitn.it/). The control code and heuristics use Python 3.8.0. The nodes run Ubuntu 18.04. We used an 8 GB memory limit and a 2-hour time limit per instance. Plots. Most of our plots use a specific type of scatterplot. The position of the marker shows the pair of values of the data point, while the size of the marker shows the number of instances for which these values were obtained. The measured quantities are noted in the plot caption. For example, the data points in Figure 1a are, for each of the solved instances, the pair of loads of the tree decompositions computed by the TW-X-W L and TW-X-Obl methods from Section 5. Instances. For threshold-d load-c treewidth we used 2788 instances from the twlib benchmark set. For generalized threshold-d load-c treewidth we used the 3071 hyperbench instances after removing self-loops and subsumed hyperedges. We created our loaded instances by marking a certain percentage of all vertices or hyperedges as heavy. We ran experiments for different ratios, but since the outcomes did not deviate too much, here we only present the results for a ratio of 30% heavy vertices/hyperedges (same as by Kask et al. [2011]). |
| Researcher Affiliation | Academia | TU Wien, Vienna, Austria; University of Warsaw, Warsaw, Poland |
| Pseudocode | No | The paper describes various algorithms and heuristics (e.g., 'An optimal elimination ordering without taking heavy vertices into account for a given graph can be computed using a SAT encoding'), but it does not present them in a structured pseudocode block or a clearly labeled 'Algorithm' section. |
| Open Source Code | No | The paper mentions using third-party solvers ('minisat 2.2.0 (http://minisat.se/) and optimathsat 1.6.2 (http://optimathsat.disi.unitn.it/)') and programming language ('The control code and heuristics use Python 3.8.0'), but there is no statement about making their own implementation code open source or providing a link to it. |
| Open Datasets | Yes | For threshold-d load-c treewidth we used 2788 instances from the twlib benchmark set. For generalized threshold-d load-c treewidth we used the 3071 hyperbench instances after removing self-loops and subsumed hyperedges. [twlib: http://www.staff.science.uu.nl/~bodla101/treewidthlib/; hyperbench: http://hyperbench.dbai.tuwien.ac.at/] |
| Dataset Splits | No | The paper mentions using '2788 instances from the twlib3 benchmark set' and '3071 hyperbench instances' and discusses creating 'loaded instances by marking a certain percentage of all vertices or hyperedges as heavy', but it does not provide specific train/validation/test splits, cross-validation details, or any other explicit dataset partitioning methodology. |
| Hardware Specification | Yes | We ran our experiments on a cluster, where each node consists of two Xeon E5-2640 CPUs, each running 10 cores at 2.4 GHz and 160 GB memory. |
| Software Dependencies | Yes | As solvers we used minisat 2.2.0 (http://minisat.se/) and optimathsat 1.6.2 (http://optimathsat.disi.unitn.it/). The control code and heuristics use Python 3.8.0. The nodes run Ubuntu 18.04. |
| Experiment Setup | Yes | We used an 8 GB memory limit and a 2-hour time limit per instance. |
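The quoted "Plots" paragraph describes scatterplots in which identical value pairs are collapsed into a single marker whose size encodes the number of instances. As a minimal sketch of that aggregation step (the function name `marker_sizes` and the sample load values are illustrative, not taken from the paper's code):

```python
from collections import Counter

def marker_sizes(pairs):
    """Collapse repeated (x, y) value pairs into plot-ready points.

    Returns a sorted list of (x, y, count) tuples; in the scatterplot
    described above, count would drive the marker size.
    """
    counts = Counter(pairs)
    return [(x, y, n) for (x, y), n in sorted(counts.items())]

# Hypothetical loads for five solved instances under two methods
# (e.g. W L load vs. Obl load, as in Figure 1a of the paper).
loads = [(3, 4), (3, 4), (5, 5), (3, 4), (5, 5)]
print(marker_sizes(loads))  # → [(3, 4, 3), (5, 5, 2)]
```

Each resulting triple would then be drawn as one marker, with its third component scaled into a marker area by the plotting library.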