Neural Causal Models for Counterfactual Identification and Estimation
Authors: Kevin Muyuan Xia, Yushu Pan, Elias Bareinboim
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 5 EXPERIMENTAL EVALUATION: We first evaluate the NCM's ability to identify counterfactual distributions through Alg. 3. Each setting consists of a target query (Q), a causal diagram (G), and a set of input distributions (Z). In total, we test 32 variations. Specifically, we evaluate the identifiability of four queries Q: (1) Average Treatment Effect (ATE), (2) Effect of Treatment on the Treated (ETT) (Pearl, 2000, Eq. 8.18), (3) Natural Direct Effect (NDE) (Pearl, 2001, Eq. 6), and (4) Counterfactual Direct Effect (Ctf-DE) (Zhang & Bareinboim, 2018, Eq. 3); each expression is shown at the top of Fig. 4. |
| Researcher Affiliation | Academia | Kevin Xia and Yushu Pan and Elias Bareinboim Causal Artificial Intelligence Laboratory Columbia University, USA {kevinmxia,yushupan,eb}@cs.columbia.edu |
| Pseudocode | Yes | Algorithm 1 (Neural ID): identifying/estimating counterfactual queries with NCMs. Algorithm 2: NCM counterfactual sampling. Algorithm 3: training model. |
| Open Source Code | Yes | The code is publicly available at: https://github.com/CausalAILab/NCMCounterfactuals |
| Open Datasets | No | The paper refers to 'a collection of available interventional (or observational if Zk = ∅) distributions from M' and 'empirical versions of such distributions in the form of finite datasets', but it does not specify a publicly available dataset by name, provide a link, or give a formal citation with author/year for any dataset used. |
| Dataset Splits | No | The paper mentions using 'finite datasets' but does not specify exact split percentages, sample counts for training, validation, and test sets, or refer to any predefined splits with citations. |
| Hardware Specification | No | The paper does not provide specific hardware details such as exact GPU/CPU models, processor types, or memory amounts used for running its experiments. |
| Software Dependencies | No | The paper mentions using a 'generative adversarial approach' and references various deep learning and causal inference tools (e.g., PyTorch for automatic differentiation, the Adam optimizer, Karpathy's pytorch-made), but it does not provide a list of software dependencies with version numbers needed to replicate the experiments. |
| Experiment Setup | No | The paper states 'More details about architecture and hyperparameters used throughout this work can be found in Appendix B', indicating that specific experimental setup details like concrete hyperparameter values or training configurations are not present in the main text. |
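The four queries named in the Research Type row (ATE, ETT, NDE, Ctf-DE) are all counterfactual contrasts obtained by evaluating a structural causal model twice on the same exogenous noise, once per intervention. A minimal sketch of this abduction-action-prediction loop for the simplest query, the ATE, using a hypothetical toy linear SCM (the model, variable names, and functions here are illustrative assumptions, not the paper's NCM architecture):

```python
import random

# Toy SCM (hypothetical, for illustration only):
#   U_X, U_Y ~ Uniform(0, 1)   (exogenous noise)
#   X := 1 if U_X > 0.5 else 0
#   Y := X + U_Y
# In this model E[Y_{X=1}] - E[Y_{X=0}] = 1, so the ATE is exactly 1.

def sample_unit(rng):
    """Abduction step: draw the exogenous noise defining one unit."""
    return {"u_x": rng.random(), "u_y": rng.random()}

def f_y(x, u):
    """Structural equation for Y."""
    return x + u["u_y"]

def counterfactual_y(u, x_do):
    """Action + prediction: force X := x_do, reuse the same noise u."""
    return f_y(x_do, u)

def estimate_ate(n=10_000, seed=0):
    """Monte-Carlo estimate of E[Y_{X=1} - Y_{X=0}]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        u = sample_unit(rng)
        total += counterfactual_y(u, 1) - counterfactual_y(u, 0)
    return total / n
```

Queries like the ETT condition this contrast on the factual value of X (e.g., averaging only over units with X = 0), and the NDE/Ctf-DE additionally hold a mediator at its value under a different intervention; all follow the same pattern of reusing one unit's noise across interventions.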