Computationally and statistically efficient learning of causal Bayes nets using path queries

Authors: Kevin Bello, Jean Honorio

NeurIPS 2018

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | In Appendix G.1, we tested our algorithms for perfect and imperfect interventions in synthetic networks, in order to empirically show the logarithmic phase transition of the number of interventional samples (see Figure 3 as an example). Appendix G.3 shows experiments on some of these benchmark networks, using the aforementioned algorithms and also our algorithm for learning transitive edges, thus recovering the full networks. |
| Researcher Affiliation | Academia | Kevin Bello, Department of Computer Science, Purdue University, West Lafayette, IN, USA (kbellome@purdue.edu); Jean Honorio, Department of Computer Science, Purdue University, West Lafayette, IN, USA (jhonorio@purdue.edu) |
| Pseudocode | Yes | Algorithm 1. Start with a set of edges Ê = ∅. Then for each pair of nodes i, j ∈ V, compute the noisy path query Q(i, j) and add the edge (i, j) to Ê if the query returns 1. Finally, compute the transitive reduction of Ê in poly-time [1], and return Ê. (See the Python sketch after this table.) |
| Open Source Code | No | The paper does not provide any explicit statement about releasing source code or a link to a code repository for the methodology described. |
| Open Datasets | Yes | Finally, in Appendix G.4, as an illustration of the availability of interventional data, we show experimental evidence using three gene perturbation datasets from [33, 9]. |
| Dataset Splits | No | The paper mentions using synthetic and benchmark networks, but it does not specify explicit training/validation/test splits, percentages, or methodologies for data partitioning needed for reproduction. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments. |
| Software Dependencies | No | The paper does not list specific software components with version numbers. |
| Experiment Setup | No | The paper does not provide specific details on experimental setup, such as hyperparameter values, optimizer settings, or other training configurations. |
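
The Pseudocode row quotes Algorithm 1 verbatim; below is a minimal Python sketch of that procedure, not the authors' implementation. The callable `path_query` is a hypothetical stand-in for the paper's noisy path query Q(i, j) (which the paper answers from interventional samples), and `networkx` is assumed here to supply the polynomial-time transitive reduction.

```python
import networkx as nx

def reconstruct_dag(nodes, path_query):
    """Sketch of Algorithm 1: build the edge set E-hat from noisy
    path queries, then return its transitive reduction.

    `path_query(i, j)` is assumed to return 1 if there is (with high
    probability) a directed path from i to j, and 0 otherwise.
    """
    E_hat = nx.DiGraph()
    E_hat.add_nodes_from(nodes)
    # Add edge (i, j) whenever the noisy path query Q(i, j) returns 1.
    for i in nodes:
        for j in nodes:
            if i != j and path_query(i, j) == 1:
                E_hat.add_edge(i, j)
    # Transitive reduction of a DAG is computable in poly-time [1];
    # this call assumes the queries were consistent enough that E_hat
    # is acyclic (networkx raises an error otherwise).
    return nx.transitive_reduction(E_hat)

# Toy usage with an exact (noise-free) path oracle on the DAG 0 -> 1 -> 2:
true_dag = nx.DiGraph([(0, 1), (1, 2)])
query = lambda i, j: int(nx.has_path(true_dag, i, j))
print(sorted(reconstruct_dag(range(3), query).edges()))  # [(0, 1), (1, 2)]
```

In the toy run, the queries add all three ancestor-descendant edges (0, 1), (0, 2), and (1, 2), and the transitive reduction removes the redundant edge (0, 2), recovering the true chain.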