Learning and Testing Causal Models with Interventions
Authors: Jayadev Acharya, Arnab Bhattacharyya, Constantinos Daskalakis, Saravanan Kandasamy
NeurIPS 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | The main highlight of our work is that we prove bounds on the number of samples, interventions, and time steps required by our algorithms. Our algorithms are enabled by a new subadditivity inequality for the squared Hellinger distance between two causal Bayesian networks. |
| Researcher Affiliation | Academia | Jayadev Acharya, School of ECE, Cornell University (acharya@cornell.edu); Arnab Bhattacharyya, National University of Singapore & Indian Institute of Science (arnabb@iisc.ac.in); Constantinos Daskalakis, EECS, MIT (costis@csail.mit.edu); Saravanan Kandasamy, STCS, Tata Institute of Fundamental Research (saravan.tuty@gmail.com) |
| Pseudocode | Yes | Algorithm 1: Algorithm for C2ST(G, ϵ) |
| Open Source Code | No | The paper does not provide any links to open-source code or explicitly state that code for the described methodology is being released. |
| Open Datasets | No | This paper is theoretical, focusing on algorithms and complexity bounds, and does not involve empirical evaluation on datasets. Therefore, it does not specify public dataset availability. |
| Dataset Splits | No | This paper is theoretical and does not involve empirical evaluation on datasets, thus it does not specify training/validation/test splits. |
| Hardware Specification | No | As the paper presents theoretical work without empirical experiments, it does not describe any hardware specifications. |
| Software Dependencies | No | The paper focuses on theoretical algorithms and their complexity, and does not include details on software dependencies with specific version numbers for experimental reproducibility. |
| Experiment Setup | No | The paper is theoretical and conducts no empirical experiments, so it does not describe an experimental setup such as hyperparameters or system-level training settings. |
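
The Research Type row above quotes the paper's claim of a new subadditivity inequality for the squared Hellinger distance between two causal Bayesian networks. As background only, the standard definition of the squared Hellinger distance between two discrete distributions P and Q is recalled below; this is the textbook quantity, not the paper's specific per-node inequality, which is not reproduced here.

```latex
% Standard definition of the squared Hellinger distance between two
% discrete distributions P and Q over a common support (background only;
% the paper's subadditivity inequality across the nodes of a causal
% Bayesian network is not restated here).
H^2(P, Q) \;=\; \frac{1}{2} \sum_{x} \Bigl( \sqrt{P(x)} - \sqrt{Q(x)} \Bigr)^{2}
          \;=\; 1 - \sum_{x} \sqrt{P(x)\, Q(x)}
```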