Validating Causal Inference Methods

Authors: Harsh Parikh, Carlos Varjao, Louise Xu, Eric Tchetgen Tchetgen

ICML 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate Credence's ability to accurately assess the relative performance of causal estimation techniques in an extensive simulation study and two real-world data applications from the Lalonde and Project STAR studies.
Researcher Affiliation | Collaboration | (1) Duke University, Durham, NC, USA; (2) Amazon.com, Seattle, WA, USA; (3) The Wharton School, University of Pennsylvania, Philadelphia, PA, USA.
Pseudocode | No | The paper does not contain any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide a direct statement or link for the open-source code of the methodology described.
Open Datasets | Yes | We demonstrate Credence's ability to accurately assess the relative performance of causal estimation techniques in an extensive simulation study and two real-world data applications from the Lalonde and Project STAR studies.
Dataset Splits | No | The paper describes training Credence on 'a single observed sample' or 'the observational component' of datasets, but does not specify explicit train/validation/test splits in percentages or sample counts for model training.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory specifications) used for running the experiments.
Software Dependencies | No | The paper mentions several software packages and libraries, such as the 'grf' R package, 'EconML', and 'scikit-learn', but does not specify their version numbers.
Experiment Setup | Yes | For the Quadratic DGP... (Figure 3(b)). (2) For the second one, we constrain both f(X) and g(X, T) to be equal to zero for all X and T. (3) Lastly, for the third one, we shrink f(X) towards zero but constrain g(X, T) = 0.15(2T - 1) to understand the sensitivity of different methods to unobserved confounding.
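The three constraint settings quoted in the Experiment Setup row can be illustrated with a small toy simulation. The sketch below is a hypothetical quadratic data-generating process, not the paper's exact specification: the dimensionality of X, the quadratic form of f, the randomized treatment assignment, the constant effect tau, the noise scale, and the function name simulate are all illustrative assumptions; only the constraint patterns (f and g zeroed out, or g fixed at 0.15(2T - 1)) follow the quoted setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n=1000, setting=1):
    """Toy quadratic DGP illustrating the three constraint settings.

    setting 1: f(X) quadratic, g(X, T) = 0
    setting 2: f(X) = 0 and g(X, T) = 0 for all X and T
    setting 3: f(X) shrunk to zero, g(X, T) = 0.15 * (2T - 1)
    """
    X = rng.normal(size=(n, 5))          # illustrative 5-dimensional covariates
    T = rng.binomial(1, 0.5, size=n)     # illustrative randomized treatment

    if setting == 1:
        f = 0.5 * (X ** 2).sum(axis=1)   # assumed quadratic form for f(X)
        g = np.zeros(n)
    elif setting == 2:
        f = np.zeros(n)                  # f(X) constrained to zero
        g = np.zeros(n)                  # g(X, T) constrained to zero
    else:
        f = np.zeros(n)                  # f(X) shrunk towards zero
        g = 0.15 * (2 * T - 1)           # fixed term from the quoted setup

    tau = 1.0                            # illustrative constant treatment effect
    Y = tau * T + f + g + rng.normal(scale=0.1, size=n)
    return X, T, Y

# Example: generate one dataset per setting and compare naive difference-in-means estimates.
for s in (1, 2, 3):
    X, T, Y = simulate(setting=s)
    print(s, Y[T == 1].mean() - Y[T == 0].mean())
```

In setting 3 the fixed 0.15(2T - 1) shift acts as the unobserved-confounding term, so comparing estimates across the three settings gives a rough sense of each method's sensitivity to it.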