Characterization and Learning of Causal Graphs with Latent Variables from Soft Interventions

Authors: Murat Kocaoglu, Amin Jaber, Karthikeyan Shanmugam, Elias Bareinboim

NeurIPS 2019

Reproducibility

Variable | Result | LLM Response
Research Type | Theoretical | In this paper, we investigate the more general setting where multiple observational and experimental distributions are available. We then introduce a novel notion of interventional equivalence class of causal graphs with latent variables based on these invariances, which associates each graphical structure with a set of interventional distributions that respect the do-calculus rules. We provide a formal graphical characterization of this equivalence class. Finally, we extend the FCI algorithm, which was originally designed to operate on conditional independences (CIs), to combine observational and interventional datasets, including new orientation rules particular to this setting.
Researcher Affiliation | Collaboration | Murat Kocaoglu, MIT-IBM Watson AI Lab, IBM Research, MA, USA (murat@ibm.com); Amin Jaber, Department of Computer Science, Purdue University, USA (jaber0@purdue.edu); Karthikeyan Shanmugam, MIT-IBM Watson AI Lab, IBM Research, NY, USA (karthikeyan.shanmugam2@ibm.com); Elias Bareinboim, Department of Computer Science, Columbia University, USA (eb@cs.columbia.edu)
Pseudocode | Yes | Algorithm 1: Algorithm for Learning Augmented PAG; Algorithm 2: Creating F-nodes; Algorithm 3: Find m-separation sets via Calculus Tests.
Open Source Code | No | The paper contains no statement about releasing source code and no link to a code repository for the described methodology.
Open Datasets | No | The paper is theoretical and algorithm-focused; it discusses 'observational and experimental distributions' as inputs to the algorithm but does not name, or provide access information for, any public or open dataset used for training or evaluation.
Dataset Splits | No | The paper is theoretical and algorithmic. It performs no empirical evaluation on datasets and therefore provides no details on training, validation, or test splits.
Hardware Specification | No | The paper is theoretical and algorithm-focused. It reports no empirical experiments that would require specifying the hardware used.
Software Dependencies | No | The paper is theoretical and algorithm-focused. It reports no empirical experiments and therefore lists no software dependencies or version numbers.
Experiment Setup | No | The paper is theoretical and algorithmic, focusing on definitions, characterizations, and algorithm design. It describes no empirical experimental setup with hyperparameters or system-level training settings.
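To make the "Creating F-nodes" step concrete, here is a minimal sketch of one common way such an augmentation is built in interventional causal discovery: for each pair of available datasets, an auxiliary F-node is added and pointed at the symmetric difference of their intervention-target sets, so that invariances between the pair become separation statements in the augmented graph. This is an illustrative assumption about the construction, not a reproduction of the paper's exact Algorithm 2; the function name and graph encoding are invented for the example.

```python
from itertools import combinations

def add_f_nodes(adj, targets):
    """Hypothetical F-node augmentation (not the paper's exact Algorithm 2).

    adj: dict mapping each node to its set of children (directed edges).
    targets: list of intervention-target sets, one per dataset; the
             observational dataset is represented by the empty set.
    Returns a new adjacency dict with one node F_ij per dataset pair
    (i, j), whose children are the symmetric difference of the two
    datasets' intervention targets.
    """
    aug = {v: set(children) for v, children in adj.items()}
    for i, j in combinations(range(len(targets)), 2):
        # Edges out of F_ij encode where datasets i and j can differ.
        aug[f"F_{i}{j}"] = set(targets[i] ^ targets[j])
    return aug

# Toy example: X -> Y, with an observational dataset and a soft
# intervention on X.
aug = add_f_nodes({"X": {"Y"}, "Y": set()}, [set(), {"X"}])
# F_01 points only at X, the sole variable whose mechanism differs.
```

Separation statements involving such F-nodes are then tested against the invariances that hold across the corresponding pair of distributions, which is the role Algorithm 3's calculus tests play in the learning procedure.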