Learning Diverse Bayesian Networks
Authors: Cong Chen, Changhe Yuan (pp. 7793-7800)
AAAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical evaluations show that our top mode models have much better diversity as well as accuracy in discovering true underlying models than those found by K-Best. Also, from the Experiments section: We implemented and tested our proposed method (named M-Mode-BNs for short) on top of URLearning. |
| Researcher Affiliation | Academia | Cong Chen, Changhe Yuan Graduate Center and Queens College City University of New York {cong.chen, changhe.yuan}@qc.cuny.edu |
| Pseudocode | Yes | Finally, a pseudo code of the A* algorithm is presented in Algorithm 1. |
| Open Source Code | No | The paper references third-party software (URLearning and K-Best Software) with links, but does not provide a link or explicit statement for their own implemented code. |
| Open Datasets | Yes | We selected several discrete benchmark models from the bnlearn Bayesian Network Repository (www.bnlearn.com), including Survey (Scutari and Denis 2014), Asia (Lauritzen and Spiegelhalter 1988), Sachs (Sachs et al. 2005), and Child (Spiegelhalter et al. 1993). |
| Dataset Splits | No | The paper describes how data sets were generated and sampled (e.g., 'randomly sampled 10 data sets with 100 data points each' or '10 random data sets were sampled for each size'), but does not provide explicit training, validation, and test splits (percentages, counts, or specific predefined splits) to reproduce the data partitioning. |
| Hardware Specification | Yes | Our experiments were performed on an IBM System with 32 core 2.67GHz Intel Xeon Processors and 512G RAM. |
| Software Dependencies | No | The paper states 'The program was written in C++ using the GNU compiler G++ on a Linux system. We also used functions from an R package called bnlearn', but does not provide specific version numbers for any of these software components. |
| Experiment Setup | No | The paper discusses the values of the `delta` parameter and the number of top solutions, but it does not provide concrete hyperparameter values or detailed system-level training settings typically found in an 'experimental setup' section (e.g., learning rates, batch sizes, optimizer configurations). |
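The data-generation procedure quoted in the Dataset Splits row (forward sampling several data sets of fixed size from a benchmark network) can be sketched as follows. This is a minimal illustration, not the paper's code: the paper sampled from bnlearn repository models in R, whereas the toy network and conditional probability tables below are invented for the example (loosely inspired by the Asia network's structure).

```python
import random

random.seed(0)

# Hypothetical toy network: smoke -> bronc, smoke -> lung.
# These CPT values are illustrative only, not from the paper.
P_SMOKE = 0.5
P_BRONC_GIVEN_SMOKE = {True: 0.6, False: 0.3}
P_LUNG_GIVEN_SMOKE = {True: 0.1, False: 0.01}

def forward_sample():
    """Draw one record by sampling each node given its parents,
    in topological order (roots first)."""
    smoke = random.random() < P_SMOKE
    bronc = random.random() < P_BRONC_GIVEN_SMOKE[smoke]
    lung = random.random() < P_LUNG_GIVEN_SMOKE[smoke]
    return {"smoke": smoke, "bronc": bronc, "lung": lung}

# Mirror the quoted setup: 10 random data sets of 100 records each.
datasets = [[forward_sample() for _ in range(100)] for _ in range(10)]
print(len(datasets), len(datasets[0]))  # 10 100
```

Because each node is sampled after its parents, every record is an exact draw from the joint distribution the network encodes, which is all the paper's evaluation requires of its synthetic data.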