Empirical Bayesian Approaches for Robust Constraint-based Causal Discovery under Insufficient Data

Authors: Zijun Cui, Naiyu Yin, Yuru Wang, Qiang Ji

IJCAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments show significant performance improvement in terms of both accuracy and efficiency over SOTA methods. We evaluate both the local and global constraint-based causal discovery performance on benchmark datasets.
Researcher Affiliation | Academia | Rensselaer Polytechnic Institute; Northeast Normal University
Pseudocode | No | The paper describes its methods mathematically and textually but contains no blocks explicitly labeled "Pseudocode" or "Algorithm", nor any structured steps formatted like code.
Open Source Code | No | The paper contains no statement about releasing source code and no link to a repository for the methods it develops.
Open Datasets | Yes | We employ six benchmark datasets that are widely used for causal discovery evaluation: CHILD, INSURANCE, ALARM, HAILFINDER, CHILD3 and CHILD5 (https://www.bnlearn.com/bnrepository/).
Dataset Splits | No | The paper mentions evaluating performance on benchmark datasets with varying sample sizes and repeated runs, but provides no specific train/validation/test splits, percentages, or absolute sample counts for partitioning the data.
Hardware Specification | Yes | Experiments are performed on a laptop with an 8-core Intel Core i9 processor, CPU only.
Software Dependencies | No | The paper does not list specific software dependencies with version numbers required for reproducibility.
Experiment Setup | No | The paper describes general experiment settings, including datasets and evaluation metrics, but provides no specific hyperparameters (e.g., learning rate, batch size, epochs, optimizer settings) or other detailed training configurations.
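For context on the table above: constraint-based causal discovery, the setting this paper addresses, decides which edges to keep by running conditional independence (CI) tests on the data. The sketch below is not the paper's empirical Bayesian method; it is a minimal, generic illustration of the kind of independence test such methods rely on, assuming SciPy is available and using synthetic binary data.

```python
# Minimal sketch of the independence testing at the core of
# constraint-based causal discovery (illustrative only; NOT the
# paper's empirical Bayesian approach). Assumes SciPy is installed.
import random

from scipy.stats import chi2_contingency

random.seed(0)

# Synthetic data with a true edge X -> Y: Y copies X 80% of the time.
n = 2000
x = [random.randint(0, 1) for _ in range(n)]
y = [xi if random.random() < 0.8 else 1 - xi for xi in x]

# Build the 2x2 contingency table of (X, Y) counts.
table = [
    [sum(1 for a, b in zip(x, y) if a == i and b == j) for j in (0, 1)]
    for i in (0, 1)
]

# Chi-square test of independence: a small p-value rejects
# independence, so a constraint-based method would keep the X-Y edge.
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.1f}, dof={dof}, p={p:.3g}")
```

With insufficient data (the regime the paper targets), such tests become unreliable, which is the failure mode its empirical Bayesian modeling is designed to mitigate.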