Improving the Efficiency of Dynamic Programming on Tree Decompositions via Machine Learning
Authors: Michael Abseher, Frederico Dusberger, Nysret Musliu, Stefan Woltran
IJCAI 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We report on extensive experiments in different problem domains which show a significant speedup when choosing the tree decomposition according to this concept over simply using an arbitrary one of the same width. |
| Researcher Affiliation | Academia | Institute of Information Systems 184/2, Vienna University of Technology, Favoritenstraße 9-11, 1040 Vienna, Austria |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to the source code for the methodology described. |
| Open Datasets | Yes | The full training dataset used for our experiments is available under the following link: www.dbai.tuwien.ac.at/research/project/dflat/features/training.zip |
| Dataset Splits | Yes | The results that we present in this paper are obtained with parameter settings that were chosen based on several of our previous experiments using 10-fold cross validation. |
| Hardware Specification | Yes | All our experiments were performed on a single core of an AMD Opteron 6308@3.5GHz processor running Debian GNU/Linux 7 (kernel 3.2.0-4-amd64) |
| Software Dependencies | Yes | We evaluate our approach using two recently developed DP solvers, D-FLAT (v. 1.0.1) and SEQUOIA (v. 0.9). The subsequent machine learning tasks were carried out with WEKA 3.6.11 [Hall et al., 2009]. (A cross-validation sketch follows the table.) |
| Experiment Setup | No | The paper states that parameter settings were chosen based on previous experiments using 10-fold cross-validation, but it does not report the specific hyperparameter values or the full configuration of the machine learning models used (e.g., the chosen learners and their parameter values). |
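
The "Dataset Splits" and "Software Dependencies" rows report that model parameters were selected via 10-fold cross-validation carried out in WEKA 3.6.11. The snippet below is a minimal sketch of such an evaluation using WEKA's Java API; the file name `training.arff`, the use of the last attribute as the prediction target, and the choice of `LinearRegression` as the runtime-prediction model are illustrative assumptions, not details reported in the paper.

```java
import java.util.Random;

import weka.classifiers.Evaluation;
import weka.classifiers.functions.LinearRegression;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class CrossValidationSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical ARFF export of the published training data
        // (the linked training.zip archive would need conversion first).
        DataSource source = new DataSource("training.arff");
        Instances data = source.getDataSet();

        // Assumption: the last attribute holds the target value
        // (e.g., the measured DP runtime to be predicted).
        data.setClassIndex(data.numAttributes() - 1);

        // Placeholder learner; the paper does not specify the exact
        // regression model and parameters reproduced here.
        LinearRegression model = new LinearRegression();

        // 10-fold cross-validation, matching the protocol the paper
        // reports for its parameter selection.
        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(model, data, 10, new Random(1));

        System.out.println(eval.toSummaryString());
        System.out.println("Correlation: " + eval.correlationCoefficient());
        System.out.println("RMSE:        " + eval.rootMeanSquaredError());
    }
}
```

Reproducing the paper's setup would additionally require converting the published feature data to ARFF and substituting the learners and parameter settings actually used by the authors.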