Turbocharging Treewidth-Bounded Bayesian Network Structure Learning

Authors: Vaidyanathan Peruvemba Ramaswamy, Stefan Szeider. Pages 3895-3903.

AAAI 2021

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments show that our method improves the score of BNs provided by state-of-the-art heuristic methods, often significantly. We implement BN-SLIM and evaluate it empirically on a large set of benchmark data sets. |
| Researcher Affiliation | Academia | Vaidyanathan Peruvemba Ramaswamy and Stefan Szeider, Algorithms and Complexity Group, TU Wien, Vienna, Austria |
| Pseudocode | No | The paper describes algorithms and methods in detail but does not include a clearly labeled "Pseudocode" or "Algorithm" block. |
| Open Source Code | Yes | The source code is attached as supplementary material, and we intend to make it publicly available. |
| Open Datasets | Yes | We consider 99 data sets for our experiments. 84 of these come from real-world benchmarks... These benchmarks are publicly available in the form of pre-partitioned data sets. The remaining 15 data sets are classified as synthetic, as they are obtained by drawing 5000 samples from known BNs... commonly used in the literature as benchmarks. |
| Dataset Splits | No | The paper mentions running experiments on data sets and discusses improvement over time, but it does not specify explicit training, validation, or test splits (e.g., percentages or counts) or a cross-validation strategy. |
| Hardware Specification | Yes | We run all our experiments on a 4-core Intel Xeon E5540 2.53 GHz CPU, with each process having access to 8 GB RAM. |
| Software Dependencies | Yes | We implement the local improvement algorithm in Python 3.6.9, using the NetworkX 2.4 graph library (Hagberg, Schult, and Swart 2008). |
| Experiment Setup | Yes | We tested budget values 7, 10, and 17, and timeout values 1 s, 2 s, and 5 s, and finally settled on a budget of 10 and a timeout of 2 seconds for our experiments. |
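The paper's implementation reportedly uses NetworkX 2.4. In treewidth-bounded BN structure learning, the treewidth bound applies to the *moral graph* of the learned DAG (drop edge directions, then connect all parents sharing a child). The sketch below, which is illustrative and not the authors' BN-SLIM code, shows this standard moralization step in NetworkX, followed by NetworkX's built-in min-degree treewidth approximation; the function name `moralize` and the toy DAG are our own.

```python
import networkx as nx
from networkx.algorithms.approximation import treewidth_min_degree

def moralize(dag: nx.DiGraph) -> nx.Graph:
    """Return the moral graph of a DAG: undirected skeleton plus
    edges "marrying" every pair of parents that share a child.
    The treewidth bound in treewidth-bounded BN learning is
    enforced on this moralized graph, not on the DAG itself."""
    moral = nx.Graph(dag.edges())
    moral.add_nodes_from(dag.nodes())
    for child in dag.nodes():
        parents = list(dag.predecessors(child))
        for i in range(len(parents)):
            for j in range(i + 1, len(parents)):
                moral.add_edge(parents[i], parents[j])
    return moral

# Toy v-structure A -> C <- B: moralization marries A and B.
dag = nx.DiGraph([("A", "C"), ("B", "C")])
moral = moralize(dag)
print(sorted(tuple(sorted(e)) for e in moral.edges()))
# -> [('A', 'B'), ('A', 'C'), ('B', 'C')]

# Upper-bound the treewidth with the min-degree heuristic;
# the triangle A-B-C has treewidth 2.
tw, _decomposition = treewidth_min_degree(moral)
print(tw)  # -> 2
```

Note that `treewidth_min_degree` is a heuristic upper bound; the paper's exact treewidth handling is more involved than this sketch.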