Tightening Bounds for Bayesian Network Structure Learning

Authors: Xiannian Fan, Changhe Yuan, Brandon Malone

AAAI 2014

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirical results show that these bounds improve the efficiency of Bayesian network learning by two to three orders of magnitude. We empirically tested our newly proposed tightened bounds using the BFBnB algorithm. We use benchmark datasets from the UCI machine learning repository and the Bayesian Network Repository. The experiments were performed on an IBM System x3850 X5 with 16-core 2.67GHz Intel Xeon processors and 512G RAM; 1.7TB of disk space was used. Results on Upper Bounds: We first tested the effect of the upper bounds generated by AWA* on BFBnB on two datasets, Parkinsons and Steel Plates.
Researcher Affiliation | Academia | Xiannian Fan, Changhe Yuan: Graduate Center and Queens College, City University of New York, 365 Fifth Avenue, New York 10016, {xfan2@gc, changhe.yuan@qc}.cuny.edu. Brandon Malone: Helsinki Institute for Information Technology, Department of Computer Science, FIN-00014 University of Helsinki, Finland, brandon.malone@cs.helsinki.fi.
Pseudocode | No | No pseudocode or algorithm blocks were found in the paper.
Open Source Code | No | The paper does not provide an explicit statement about the availability of open-source code for the methodology described, nor does it provide a link to a repository.
Open Datasets | Yes | We use benchmark datasets from the UCI machine learning repository and the Bayesian Network Repository.
Dataset Splits | No | No specific information about training, validation, or test dataset splits (e.g., percentages or sample counts) is provided in the paper.
Hardware Specification | Yes | The experiments were performed on an IBM System x3850 X5 with 16-core 2.67GHz Intel Xeon processors and 512G RAM; 1.7TB of disk space was used.
Software Dependencies | No | The paper mentions software components like SCIP, but does not provide specific version numbers for any software dependencies needed to replicate the experiments.
Experiment Setup | No | No specific experimental setup details, such as hyperparameter values (e.g., learning rate, batch size) or detailed training configurations, are provided in the main text.
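For context on the method being assessed: the paper's BFBnB search prunes a shortest-path formulation of structure learning using upper bounds supplied by anytime weighted A* (AWA*). The following is a minimal sketch of how an externally supplied upper bound prunes a best-first branch and bound; the helper names (`successors`, `heuristic`) and the toy graph are hypothetical, and this is not the authors' implementation.

```python
import heapq

def bfbnb(start, is_goal, successors, heuristic, upper_bound):
    """Best-first branch and bound over a search graph (illustrative sketch).

    Any node whose optimistic cost g + h reaches the known upper bound
    (e.g., the score of a network already found by AWA*) is pruned:
    expanding it cannot yield a solution better than the incumbent.
    """
    frontier = [(heuristic(start), 0.0, start)]  # (f, g, node)
    best_g = {start: 0.0}
    while frontier:
        f, g, node = heapq.heappop(frontier)
        if f >= upper_bound:
            continue  # pruned by the bound
        if is_goal(node):
            return g  # with an admissible heuristic, first goal popped is optimal
        for child, cost in successors(node):
            g2 = g + cost
            if g2 < best_g.get(child, float("inf")):
                best_g[child] = g2
                heapq.heappush(frontier, (g2 + heuristic(child), g2, child))
    return None  # no solution cheaper than the upper bound exists
```

A tighter upper bound discards more frontier nodes before expansion, which is the mechanism behind the reported two-to-three-orders-of-magnitude speedups.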