Foundations of Testing for Finite-Sample Causal Discovery

Authors: Tom Yan, Ziyu Xu, Zachary Chase Lipton

ICML 2024

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through empirical simulations, we confirm the usefulness of our framework. |
| Researcher Affiliation | Academia | 1Machine Learning Department, Carnegie Mellon University, Pittsburgh, USA 2Department of Statistics and Data Science, Carnegie Mellon University, Pittsburgh, USA. |
| Pseudocode | Yes | Algorithm 1 Causal Discovery Algorithm Template |
| Open Source Code | No | The paper does not contain any explicit statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | No | The paper generates synthetic data for its experiments: 'We consider two classes of graphs. (1) Erdos-Renyi graphs with varying number of nodes and density... (2) tree graphs with n... These are used to generate the graph skeleton.' It does not use or provide access to a publicly available or open dataset. |
| Dataset Splits | No | The paper describes simulation configurations with varying interventional samples and trials, but it does not specify traditional training, validation, or test dataset splits for reproducibility. |
| Hardware Specification | No | The paper does not specify any hardware (e.g., GPU, CPU models, memory) used for running the experiments or simulations. |
| Software Dependencies | No | The paper does not list any specific software dependencies or versions (e.g., programming languages, libraries, frameworks) required to reproduce the experiments. |
| Experiment Setup | Yes | In the experiment, we fix b = 0.1, variance 1 and the interventional value ν = 1. We vary the number of interventional samples {100, 500, 1000, 5000, 10000}, tolerated error rate α {0.1, 0.2} and edge strength k {0.1, 0.2, 1, 2, 10}, all of which affect hypotheses testing (i.e. number of orientations). |