Partial Identification of Treatment Effects with Implicit Generative Models

Authors: Vahid Balazadeh Meresht, Vasilis Syrgkanis, Rahul G. Krishnan

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We run our partial identification algorithms on a variety of simulated settings. We mainly focus on the synthetic data generating processes as the ground truth must be known to evaluate our derived bounds properly. Our primary goal is to show that using uniform average treatment derivatives instead of directly optimizing the average treatment effect will result in tighter and more stable bounds.
Researcher Affiliation | Academia | Vahid Balazadeh (1), Vasilis Syrgkanis (2), Rahul G. Krishnan (1); (1) University of Toronto, Vector Institute; (2) Stanford University
Pseudocode | No | The paper describes the algorithmic steps in narrative text within Section 4.3 and Appendix D, but it does not include a formally structured block labeled 'Pseudocode' or 'Algorithm'.
Open Source Code | Yes | Our code is accessible at https://github.com/rgklab/partial_identification
Open Datasets | Yes | We also test our method on an ACIC dataset, a case study with real-world covariates, to illustrate the performance of our algorithm on higher-dimensional datasets.
Dataset Splits | Yes | For the Jobs dataset (Dua and Graff, 2017), we use 12,000 samples for training and 2,000 for testing, similar to Guo et al. [2022].
Hardware Specification | Yes | Experiments were run on a single machine with 2 Nvidia RTX A5000 GPUs, 2 Nvidia RTX A6000 GPUs, 96 CPUs, and 188 GB of RAM, and on a larger internal cluster comprising hundreds of GPUs.
Software Dependencies | No | The paper mentions software components like 'neural networks', 'GANs', and 'Sinkhorn Generative Networks' but does not pin specific library versions (e.g., PyTorch 1.x, TensorFlow 2.x).
Experiment Setup | Yes | Our implementation details, as well as hyper-parameters, can be found in Appendix F.
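The "Research Type" row quotes the paper's goal of bounding effects via uniform average treatment derivatives rather than optimizing the average treatment effect (ATE) directly. The calculus identity that links the two quantities, ATE = mu(1) - mu(0) = the integral of mu'(t) from 0 to 1, can be checked numerically; the sigmoid dose-response curve below is purely illustrative and not taken from the paper:

```python
import numpy as np

# Hypothetical dose-response curve mu(t) = E[Y | do(T=t)] for a
# continuous treatment t in [0, 1]; the sigmoid is illustrative only.
def mu(t):
    return 1.0 / (1.0 + np.exp(-4.0 * (t - 0.5)))

# Direct ATE between the two endpoint treatments.
ate = mu(1.0) - mu(0.0)

# Same quantity recovered from treatment derivatives:
# ATE = integral_0^1 mu'(t) dt, approximated with finite differences.
ts = np.linspace(0.0, 1.0, 1001)
derivs = np.gradient(mu(ts), ts)     # numerical mu'(t) on the grid
ate_from_derivs = float(np.trapz(derivs, ts))

print(round(ate, 4), round(ate_from_derivs, 4))
```

The two estimates agree up to discretization error, which is the sense in which derivative-based objectives target the same estimand as the ATE.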
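The "Dataset Splits" row quotes a fixed 12,000-train / 2,000-test split for the Jobs dataset. A minimal sketch of such a split, assuming the data is already loaded as NumPy arrays (the array contents below are synthetic stand-ins, and `train_test_split` is a hypothetical helper, not from the paper's code):

```python
import numpy as np

def train_test_split(X, y, n_train=12_000, n_test=2_000, seed=0):
    """Shuffle and split arrays into fixed-size train/test partitions."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    train_idx = idx[:n_train]
    test_idx = idx[n_train:n_train + n_test]
    return X[train_idx], y[train_idx], X[test_idx], y[test_idx]

# Synthetic stand-in data with the same sample counts as the quoted split.
X = np.arange(14_000, dtype=float).reshape(14_000, 1)
y = np.arange(14_000)

X_tr, y_tr, X_te, y_te = train_test_split(X, y)
print(X_tr.shape, X_te.shape)  # (12000, 1) (2000, 1)
```

Fixing the seed makes the partition reproducible across runs, which matters when reported bounds are compared against a held-out test set.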