Comparing Causal Frameworks: Potential Outcomes, Structural Models, Graphs, and Abstractions

Authors: Duligur Ibeling, Thomas Icard

NeurIPS 2023

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Theoretical | The aim of this paper is to make clear and precise the relationship between the Rubin causal model (RCM) and structural causal model (SCM) frameworks for causal inference. Adopting a neutral logical perspective, and drawing on previous work, we show what is required for an RCM to be representable by an SCM. A key result then shows that every RCM, including those that violate algebraic principles implied by the SCM framework, emerges as an abstraction of some representable RCM. Finally, we illustrate the power of this conciliatory perspective by pinpointing an important role for SCM principles in classic applications of RCMs; conversely, we offer a characterization of the algebraic constraints implied by a graph, helping to substantiate further comparisons between the two frameworks. |
| Researcher Affiliation | Academia | Duligur Ibeling, Department of Computer Science, Stanford University (duligur@stanford.edu); Thomas Icard, Department of Philosophy, Stanford University (icard@stanford.edu) |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not mention providing open-source code for the described methodology. |
| Open Datasets | No | The paper is theoretical and does not involve training on datasets; it focuses on formal frameworks and proofs. |
| Dataset Splits | No | The paper is theoretical and does not involve data splits for training, validation, or testing. |
| Hardware Specification | No | The paper is theoretical and does not describe any computational experiments that would require specific hardware specifications. |
| Software Dependencies | No | The paper is theoretical and does not mention any software dependencies with specific version numbers. |
| Experiment Setup | No | The paper is theoretical and does not describe any experiments that would involve hyperparameters or training settings. |
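The abstract's central claim, that an RCM can be represented by an SCM, can be made concrete with a toy example. The sketch below is not from the paper; the model, variable names, and XOR structural equation are all illustrative assumptions. It shows how an SCM induces potential outcomes (an RCM) via interventions, and checks the consistency (composition) principle that the SCM framework implies.

```python
# Minimal illustrative sketch (not the authors' construction): a two-variable
# SCM and the RCM of potential outcomes it induces via interventions.
import itertools

def scm(u_x, u_y, do_x=None):
    """Structural equations X := U_x, Y := X XOR U_y.

    Passing do_x simulates the intervention do(X = do_x), i.e. the
    mutilated model in which X's equation is replaced by the constant.
    """
    x = u_x if do_x is None else do_x
    y = x ^ u_y
    return x, y

def potential_outcome_Y(u, x):
    """Potential outcome Y_x(u): solve the mutilated model under do(X=x)
    for the unit u, where a unit is a setting of the exogenous variables."""
    u_x, u_y = u
    _, y = scm(u_x, u_y, do_x=x)
    return y

# Consistency (an algebraic principle implied by the SCM framework):
# if X(u) = x is observed, then Y(u) = Y_x(u) for that unit.
for u in itertools.product([0, 1], repeat=2):
    x_obs, y_obs = scm(*u)
    assert potential_outcome_Y(u, x_obs) == y_obs
print("consistency holds for all units")
```

An RCM that violated consistency on these units would, by the paper's key result, still arise as an abstraction of some representable RCM over a richer unit space.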