Adapting Neural Networks for the Estimation of Treatment Effects

Authors: Claudia Shi, David Blei, Victor Veitch

NeurIPS 2019

Reproducibility Variable: Research Type
Result: Experimental
LLM Response: "Studies on benchmark datasets for causal inference show these adaptations outperform existing methods. Code is available at github.com/claudiashi57/dragonnet." ... "[Section 5: Experiments] Do Dragonnet and targeted regularization improve treatment effect estimation in practice? ... We study the methods empirically using two semi-synthetic benchmarking tools. We find that Dragonnet and targeted regularization substantially improve estimation quality."

Reproducibility Variable: Researcher Affiliation
Result: Academia
LLM Response: "Claudia Shi¹, David M. Blei¹,², and Victor Veitch². ¹Department of Computer Science, Columbia University. ²Department of Statistics, Columbia University."

Reproducibility Variable: Pseudocode
Result: No
LLM Response: The paper describes the architecture (Figure 1) and training objectives using text and mathematical equations, but it does not include any explicit pseudocode or algorithm blocks. (For illustration, a hedged sketch of the described architecture appears after this table.)

Reproducibility Variable: Open Source Code
Result: Yes
LLM Response: "Code is available at github.com/claudiashi57/dragonnet."

Reproducibility Variable: Open Datasets
Result: Yes
LLM Response: "IHDP. Hill [Hil11] introduced a semi-synthetic dataset constructed from the Infant Health and Development Program (IHDP). ... Following [SJS16], we use 1000 realizations from the NPCI package [Dor16]. ... ACIC 2018. We also use the IBM causal inference benchmarking framework, which was developed for the 2018 Atlantic Causal Inference Conference competition data (ACIC 2018) [Shi+18]."

Reproducibility Variable: Dataset Splits
Result: Yes
LLM Response: "For IHDP experiments, we follow established practice [e.g. SJS16]. We randomly split the data into test/validation/train with proportion 63/27/10 and report the in-sample and out-of-sample estimation errors." (A sketch of such a split follows the table.)

Reproducibility Variable: Hardware Specification
Result: No
LLM Response: The only hardware-related statement is an acknowledgment: "The GPUs used for this research were donated by the NVIDIA Corporation."

Reproducibility Variable: Software Dependencies
Result: No
LLM Response: The paper mentions using neural networks and logistic regression, but it does not provide specific software dependencies with version numbers (e.g., Python, TensorFlow, or PyTorch versions) used for the implementation.

Reproducibility Variable: Experiment Setup
Result: Yes
LLM Response: "For Dragonnet and targeted regularization, we set the hyperparameters α in equation 2.2 and β in equation 3.2 to 1. For all models, the hidden layer size is 200 for the shared representation layers and 100 for the conditional outcome layers. We train using stochastic gradient descent with momentum." (A sketch of the combined training objective follows the table.)
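Since the paper itself contains no pseudocode, the following is a minimal sketch of the three-headed architecture described above: shared representation layers of width 200 feeding two conditional-outcome heads of width 100 and a propensity head. The layer counts, ELU activations, and PyTorch framing are assumptions for illustration, not the authors' implementation (see github.com/claudiashi57/dragonnet for that).

```python
import torch
import torch.nn as nn

class Dragonnet(nn.Module):
    """Sketch of the three-headed Dragonnet: a shared representation Z(x),
    two conditional-outcome heads Q(0, x) and Q(1, x), and a propensity
    head g(x). Layer counts and activations are assumptions, not the
    authors' released code."""

    def __init__(self, n_features: int, shared_dim: int = 200, outcome_dim: int = 100):
        super().__init__()
        # Shared representation (assumed: three hidden layers of width 200).
        self.representation = nn.Sequential(
            nn.Linear(n_features, shared_dim), nn.ELU(),
            nn.Linear(shared_dim, shared_dim), nn.ELU(),
            nn.Linear(shared_dim, shared_dim), nn.ELU(),
        )
        # Propensity head g(x): probability of treatment given covariates.
        self.propensity = nn.Sequential(nn.Linear(shared_dim, 1), nn.Sigmoid())

        # Conditional-outcome heads Q(0, x) and Q(1, x), width 100 each.
        def outcome_head():
            return nn.Sequential(
                nn.Linear(shared_dim, outcome_dim), nn.ELU(),
                nn.Linear(outcome_dim, outcome_dim), nn.ELU(),
                nn.Linear(outcome_dim, 1),
            )
        self.q0, self.q1 = outcome_head(), outcome_head()

    def forward(self, x):
        z = self.representation(x)
        return self.q0(z), self.q1(z), self.propensity(z)
```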
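The 63/27/10 split quoted above takes only a few lines to reproduce. This sketch assumes the conventional train/validation/test ordering for those proportions (the quote lists them as test/validation/train) and is illustrative, not the authors' script.

```python
import numpy as np

def split_indices(n: int, seed: int = 0):
    """Randomly partition n sample indices into train/validation/test
    subsets with proportions 63/27/10 (ordering assumed)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    n_train, n_val = int(0.63 * n), int(0.27 * n)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

# IHDP has 747 units per realization.
train_idx, val_idx, test_idx = split_indices(747)
```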
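The quoted setup fixes α (the weight on the propensity term in equation 2.2) and β (the weight on the targeted-regularization term in equation 3.2) to 1. The sketch below assembles that combined objective from the equations as described, with ε as the extra trainable perturbation parameter; it is a reading of the paper's objective, not the released code.

```python
import torch
import torch.nn.functional as F

def dragonnet_loss(q0, q1, g, eps, y, t, alpha=1.0, beta=1.0):
    """Combined objective: outcome MSE plus alpha * propensity cross-entropy
    (the paper's equation 2.2), plus beta * targeted regularization
    (equation 3.2). Assembled from the descriptions quoted above."""
    g = g.clamp(1e-2, 1 - 1e-2)    # truncate propensities; g divides below
    q = t * q1 + (1 - t) * q0      # Q(t_i, x_i): head matching the treatment
    outcome_loss = F.mse_loss(q, y)
    propensity_loss = F.binary_cross_entropy(g, t)
    # Targeted regularization: perturb Q along t/g - (1 - t)/(1 - g) by a
    # trainable scalar eps, then penalize the perturbed estimate's residual.
    q_tilde = q + eps * (t / g - (1 - t) / (1 - g))
    targeted_reg = torch.mean((y - q_tilde) ** 2)
    return outcome_loss + alpha * propensity_loss + beta * targeted_reg
```

In training, eps would be an extra nn.Parameter optimized jointly with the network weights, using stochastic gradient descent with momentum per the setup quoted above.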