Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Learning Causal Networks via Additive Faithfulness

Authors: Kuang-Yao Lee, Tianqi Liu, Bing Li, Hongyu Zhao

JMLR 2020 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Through simulation studies we show that our method outperforms existing methods when commonly assumed conditions such as Gaussian or Gaussian copula distributions do not hold. Finally, the usefulness of AFDAG formulation is demonstrated through an application to a proteomics data set."
Researcher Affiliation | Collaboration | Kuang-Yao Lee (EMAIL), Department of Statistical Science, Temple University, 1810 Liacouras Walk, Philadelphia, PA 19122; Tianqi Liu (EMAIL), Google LLC, 111 8th Ave, New York, NY 10011; Bing Li (EMAIL), Department of Statistics, Pennsylvania State University, 326 Thomas Building, University Park, PA 16802; Hongyu Zhao (EMAIL), Department of Biostatistics, Yale School of Public Health, 60 College Street, New Haven, CT 06520
Pseudocode | Yes | "Pseudo codes: AF-PC skeleton algorithm; initialize: set l = 1 and E^SKE to be the complete graph; repeat ..."
Open Source Code | No | The paper does not explicitly provide a link to source code, state that code is released in supplementary materials, or make an unambiguous affirmative statement about code availability for the methodology described.
Open Datasets | Yes | "We next apply our method, the linear-PC, HSIC-PC, and KCI-PC to a flow cytometry data set from Sachs et al. (2005), in which p = 11 protein activity levels were measured on n = 7466 cells."
Dataset Splits | No | "We conduct a stability analysis by first drawing a subsample of 2,000 cells. For each subsample, we then compute the SHD(Ê^CPDAG) from all competing methods. This process is repeated 20 times and the averaged SHDs and standard deviations are reported in Table 3. AF-PC performs the best among all competing methods."
Hardware Specification | No | "This research includes calculations carried out on HPC resources supported in part by the National Science Foundation (NSF) through major research instrumentation grant number 1625061 and by the US Army Research Laboratory under contract number W911NF-162-0189."
Software Dependencies | No | The paper mentions algorithms like linear-PC, rank-PC, HSIC-PC, and KCI-PC but does not list any specific software libraries or their version numbers that are critical for reproducing the experiments.
Experiment Setup | Yes | "We fix the network size at p = 5, the sparse parameter at d = 0.1, and vary the sample size n between 50, 100, and 300. We also fix the number of resamplings in the approximated permutation test b = 5000."
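The pseudocode quoted in the table describes the skeleton phase of a PC-style search. A minimal Python sketch of that generic loop is below; the oracle `cond_indep` and the l = 0 starting level are illustrative assumptions, not the paper's exact AF-PC procedure:

```python
from itertools import combinations

def pc_skeleton(nodes, cond_indep):
    """Generic PC skeleton search (sketch, not the paper's AF-PC).

    `cond_indep(i, j, S)` is a hypothetical oracle returning True when
    nodes i and j are conditionally independent given the set S.
    """
    # Initialize with the complete undirected graph.
    adj = {v: set(nodes) - {v} for v in nodes}
    l = 0
    while True:
        testable = False  # does any pair still admit a size-l conditioning set?
        for i in nodes:
            for j in list(adj[i]):
                if len(adj[i] - {j}) < l:
                    continue
                testable = True
                # Try all conditioning sets S of size l from i's other neighbours.
                for S in combinations(sorted(adj[i] - {j}), l):
                    if cond_indep(i, j, set(S)):
                        adj[i].discard(j)  # remove the edge i - j
                        adj[j].discard(i)
                        break
        if not testable:
            break
        l += 1
    return {frozenset((i, j)) for i in nodes for j in adj[i]}
```

With an oracle encoding the chain X - Y - Z (so X and Z are independent given Y), the search correctly drops the X - Z edge while keeping X - Y and Y - Z.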
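The stability analysis quoted under Dataset Splits (subsample 2,000 cells, refit, compute SHD, repeat 20 times, report mean and standard deviation) can be sketched generically. Here `fit` stands in for any structure-learning estimator returning an edge set, and the SHD on undirected edge sets is a simplification of the CPDAG-based distance in the paper:

```python
import random

def shd(edges_a, edges_b):
    """Structural Hamming distance for undirected edge sets:
    the number of edges present in one graph but not the other."""
    return len(edges_a ^ edges_b)

def stability_analysis(data, fit, reference, n_sub=2000, n_rep=20, seed=0):
    """Repeatedly subsample rows of `data`, refit the network with the
    hypothetical estimator `fit`, and score each result against
    `reference` with SHD. Returns the mean and standard deviation."""
    rng = random.Random(seed)
    scores = []
    for _ in range(n_rep):
        sub = rng.sample(data, min(n_sub, len(data)))
        scores.append(shd(fit(sub), reference))
    mean = sum(scores) / len(scores)
    var = sum((s - mean) ** 2 for s in scores) / len(scores)
    return mean, var ** 0.5
</n>```

A perfectly stable estimator (one that recovers the reference edge set on every subsample) yields mean SHD 0 with zero standard deviation.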