Learning Bayesian Networks with Low Rank Conditional Probability Tables
Authors: Adarsh Barik, Jean Honorio
NeurIPS 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We formally prove that our method correctly recovers the true directed structure, runs in polynomial time, and needs only polynomially many samples with respect to the number of nodes. We also provide further improvements in efficiency if we have access to some observational data. For synthetic experiments validating our theory, please see Appendix D. |
| Researcher Affiliation | Academia | Adarsh Barik, Department of Computer Science, Purdue University, West Lafayette, Indiana, USA (abarik@purdue.edu); Jean Honorio, Department of Computer Science, Purdue University, West Lafayette, Indiana, USA (jhonorio@purdue.edu) |
| Pseudocode | Yes | Algorithm 1: getParents(V); Algorithm 2: getTerminalNodes(S) (see the illustrative sketch after this table) |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | No | The paper discusses theoretical results and sample complexity for learning Bayesian networks but does not refer to specific datasets used for training or their public availability. It mentions 'observational data' and 'black-box queries' but no concrete dataset names or access information. |
| Dataset Splits | No | The paper is theoretical and focuses on proving properties of the proposed method (e.g., polynomial time and sample complexity). It does not describe experiments with dataset splits (training, validation, test) for reproducibility. |
| Hardware Specification | No | The paper is theoretical and focuses on algorithms and proofs. It does not mention any specific hardware used for computation or experiments. |
| Software Dependencies | No | The paper is theoretical and does not mention any specific software dependencies or version numbers required to implement or run the described algorithms. |
| Experiment Setup | No | The paper is theoretical, presenting algorithms and their theoretical guarantees. It does not provide specific details such as hyperparameters, training configurations, or system-level settings for the experimental setup. While it notes "For synthetic experiments validating our theory, please see Appendix D", these details are not in the main text. |
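
The paper's pseudocode (Algorithms 1 and 2, getParents and getTerminalNodes) is not reproduced in this report. As a purely illustrative aid, the Python sketch below shows one common control flow such routines could plug into: repeatedly identify terminal (sink) nodes of the remaining graph, record their parents, and peel them off. Everything here is an assumption for illustration only: the two function names are borrowed from the paper, but their bodies are stand-ins for the paper's actual query-based procedures, which exploit the low-rank structure of the conditional probability tables.

```python
# Hypothetical skeleton only: the real getParents / getTerminalNodes in the
# paper rely on low-rank CPT structure and black-box queries; here both are
# faked with a known adjacency dictionary so the control flow runs end to end.

def get_terminal_nodes(nodes, parents_of):
    """Return the nodes in `nodes` that are parents of no other node in
    `nodes` (sinks of the induced subgraph). Stand-in for Algorithm 2."""
    non_sinks = {p for v in nodes for p in parents_of[v] if p in nodes}
    return [v for v in nodes if v not in non_sinks]

def get_parents(v, candidates, parents_of):
    """Return the parents of `v` among `candidates`. Stand-in for
    Algorithm 1, which the paper implements via queries."""
    return [p for p in parents_of[v] if p in candidates]

def recover_structure(nodes, parents_of):
    """Peel-off loop: find sinks, record their parent sets, remove them.
    Every finite DAG has a sink, so each pass removes at least one node."""
    remaining = set(nodes)
    edges = {}
    while remaining:
        for v in get_terminal_nodes(remaining, parents_of):
            edges[v] = get_parents(v, remaining - {v}, parents_of)
            remaining.discard(v)
    return edges

if __name__ == "__main__":
    # Toy DAG: 0 -> 1 -> 2 and 0 -> 2.
    truth = {0: [], 1: [0], 2: [0, 1]}
    print(recover_structure([0, 1, 2], truth))  # {2: [0, 1], 1: [0], 0: []}
```

In this toy version a known adjacency dictionary plays the role of the paper's black-box query access, so the sketch demonstrates only the peel-off loop, not the statistical core of the method.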