Conditional Independence in Testing Bayesian Networks

Authors: Yujia Shen, Haiying Huang, Arthur Choi, Adnan Darwiche

ICML 2019

Reproducibility Variable Result LLM Response
Research Type Experimental Finally, we illustrate our results on a number of concrete examples, including a case study on Hidden Markov Models. We simulated examples from a third-order HMM and trained both an HMM and a Testing HMM using the structure in Figure 8(a). The cross entropy loss was used to train both the HMM and the Testing HMM using an AC and a TAC, respectively. Our goal was to demonstrate the extent to which a Testing HMM can compensate for the modeling error, i.e., the missing dependencies of Ht on Ht−2 and Ht−3. We used data sets with 16,384 records for each run and 5-fold cross validation to report prediction accuracy as shown in Figure 9.
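The quoted passage says both the HMM (via an AC) and the Testing HMM (via a TAC) were trained with the cross entropy loss. The paper gives no code; the helper below is a minimal, hypothetical sketch of that loss for binary predictions, not the authors' implementation.

```python
import numpy as np

def cross_entropy(p_pred, y_true):
    """Mean binary cross-entropy between predicted probabilities
    and 0/1 labels. Hypothetical helper illustrating the loss the
    paper reports using; the AC/TAC training code is not published.
    """
    # Clip to avoid log(0) for saturated predictions.
    p_pred = np.clip(p_pred, 1e-12, 1 - 1e-12)
    return float(-np.mean(y_true * np.log(p_pred)
                          + (1 - y_true) * np.log(1 - p_pred)))
```

For example, predicting 0.5 for both a positive and a negative label yields a loss of ln 2.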
Researcher Affiliation Academia Computer Science Department, University of California, Los Angeles, California, USA. Correspondence to: Yujia Shen <yujias@cs.ucla.edu>.
Pseudocode No The paper does not contain explicit pseudocode or algorithm blocks. It describes processes and structures using text and diagrams.
Open Source Code No The paper does not provide any statement about releasing source code or a link to a code repository.
Open Datasets Yes This is a real-world example comparing the success rates of two treatments for kidney stones (https://en.wikipedia.org/wiki/Simpson%27s_paradox).
Dataset Splits Yes We used data sets with 16,384 records for each run and 5-fold cross validation to report prediction accuracy as shown in Figure 9.
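The paper reports 5-fold cross validation over 16,384 records but does not specify the splitting procedure. A pure-NumPy sketch of one plausible scheme (random permutation, near-equal folds; the seed and shuffling strategy are assumptions):

```python
import numpy as np

def five_fold_splits(n_records, seed=0):
    """Yield (train_idx, test_idx) index pairs for 5-fold cross
    validation. Illustrative sketch only; the paper does not state
    how its folds were constructed.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_records)          # shuffle record indices
    folds = np.array_split(idx, 5)            # 5 near-equal folds
    for k in range(5):
        test_idx = folds[k]
        train_idx = np.concatenate([folds[j] for j in range(5) if j != k])
        yield train_idx, test_idx
```

With 16,384 records this produces four test folds of 3,277 records and one of 3,276, each paired with the remaining records for training.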
Hardware Specification No The paper does not provide any specific details about the hardware used for running experiments.
Software Dependencies No The paper does not provide specific software names with version numbers for dependencies.
Experiment Setup Yes We considered all transition models for third-order HMMs such that P(ht | ht−3, ht−2, ht−1) is either 0.95 or 0.05. We assumed binary variables and a chain of length 8. We used uniform initial distributions and emission model P(ht | et) = P(h̄t | ēt) = 0.99. The cross entropy loss was used to train both the HMM and the Testing HMM using an AC and a TAC, respectively.
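The setup above describes a binary third-order HMM with chain length 8, transition probabilities of 0.95 or 0.05, uniform initial distributions, and emissions agreeing with the hidden state with probability 0.99. The sampler below is a hedged sketch of such a model; the handling of the first three states (drawn independently and uniformly) and the reading of the emission model as "evidence matches the hidden value with probability 0.99" are assumptions, not details from the paper.

```python
import numpy as np

def sample_third_order_hmm(trans, length=8, p_emit=0.99, seed=0):
    """Sample one hidden/evidence sequence from a binary third-order HMM.

    Assumptions (not specified in the paper):
      - trans[(h3, h2, h1)] = P(h_t = 1 | three previous states),
        each entry 0.95 or 0.05 as described.
      - The first three hidden states are drawn uniformly.
      - Evidence e_t equals h_t with probability p_emit.
    """
    rng = np.random.default_rng(seed)
    h = list(rng.integers(0, 2, size=3))       # uniform initial states
    while len(h) < length:
        p1 = trans[(h[-3], h[-2], h[-1])]      # P(h_t = 1 | history)
        h.append(int(rng.random() < p1))
    h = np.array(h[:length])
    flip = rng.random(length) >= p_emit        # flip emission w.p. 1 - p_emit
    e = np.where(flip, 1 - h, h)
    return h, e
```

For instance, a "sticky" model can set every transition entry to 0.95, so each hidden state strongly favors value 1 regardless of history.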