Deep IV: A Flexible Approach for Counterfactual Prediction
Authors: Jason Hartford, Greg Lewis, Kevin Leyton-Brown, Matt Taddy
ICML 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments show that it outperforms existing machine learning approaches. We evaluated our approach on both simulated and real data. We used simulations to assess Deep IV's ability to recover an underlying counterfactual function both in a low-dimensional domain with informative features and in a high-dimensional domain with features consisting of pixels of a handwritten image. |
| Researcher Affiliation | Collaboration | University of British Columbia, Canada; Microsoft Research, New England, USA. |
| Pseudocode | No | The paper describes procedures in narrative text but does not include any explicitly labeled pseudocode blocks or algorithms. |
| Open Source Code | Yes | Implementation with all simulation experiments is available at https://github.com/jhartford/DeepIV |
| Open Datasets | Yes | High-dimensional feature space: In real applications, we do not typically get to observe variables like customer type that cleanly delineate our training examples into explicit classes, but may instead observe a large number of features that correlate with such types. To simulate this, we replaced the customer type label s ∈ {0,1,...,7} with the pixels of the corresponding handwritten digit from the MNIST dataset (LeCun & Cortes, 2010). (A sketch of this feature construction follows the table.) |
| Dataset Splits | No | In both stages, hyper-parameters can be chosen to minimize the respective loss functions on a held-out validation set, and improvements in performance against this metric will correlate with improvements on the true structural loss, which cannot be evaluated directly. (A sketch of this two-stage validation scheme follows the table.) |
| Hardware Specification | No | We would also like to thank Holger Hoos for the use of the Ada cluster, without which the experiments would not have been possible. |
| Software Dependencies | No | The paper mentions software components like an 'R implementation' and optimization methods like 'SGD' and 'Adam', but it does not provide specific version numbers for any libraries or frameworks used in the implementation. |
| Experiment Setup | No | Full details of model architectures and hyperparameter choices for all the models are given in the Appendix. |
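
The Open Datasets row quotes the paper's high-dimensional simulation, in which each low-dimensional customer type is replaced by the pixels of a handwritten digit. The sketch below is an illustrative reconstruction of that feature construction, not the authors' code: the function name `types_to_digit_images`, the random sampling of one image per type, and the use of the Keras MNIST loader are all assumptions.

```python
# Illustrative reconstruction of the high-dimensional simulation features: each
# customer type s in {0,...,7} is replaced by the flattened pixels of a randomly
# drawn MNIST image of that digit. Names and the sampling scheme are assumptions,
# not the authors' code; requires a TensorFlow/Keras install for the MNIST loader.
import numpy as np
from tensorflow.keras.datasets import mnist


def types_to_digit_images(customer_types, rng=None):
    """Map integer customer types (0-7) to flattened 28x28 MNIST images of that digit."""
    if rng is None:
        rng = np.random.default_rng(0)
    (x_train, y_train), _ = mnist.load_data()
    x_train = x_train.reshape(len(x_train), -1) / 255.0  # flatten, scale to [0, 1]
    by_digit = {d: x_train[y_train == d] for d in range(8)}  # index images by label
    return np.stack([by_digit[s][rng.integers(len(by_digit[s]))] for s in customer_types])


# Usage: draw customer types uniformly, then swap in the pixel features.
s = np.random.default_rng(1).integers(0, 8, size=1000)
x_pixels = types_to_digit_images(s)  # shape (1000, 784)
```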
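
The Dataset Splits row quotes the paper's validation strategy: each of the two stages has its own loss, and hyper-parameters are chosen to minimize that loss on a held-out validation set. The sketch below illustrates only this selection pattern, under stated assumptions: it substitutes a fixed-variance Gaussian treatment model for the paper's mixture density network, and the 80/20 split, layer sizes, epoch count, and candidate grid are illustrative choices, not the authors' settings.

```python
# Illustrative two-stage selection on held-out validation loss (not the authors' code).
# Stage 1 models the treatment given instrument z and features x; Stage 2 models the
# outcome given sampled treatments and x. A fixed-variance Gaussian stands in for the
# paper's mixture density network; sizes and the candidate grid are assumptions.
import numpy as np
from tensorflow.keras import Model, layers


def build_treatment_net(d_z, d_x, hidden):
    """Stage 1: predict the treatment mean from (z, x); MSE = Gaussian MLE with fixed variance."""
    z_in, x_in = layers.Input((d_z,)), layers.Input((d_x,))
    h = layers.Dense(hidden, activation="relu")(layers.Concatenate()([z_in, x_in]))
    model = Model([z_in, x_in], layers.Dense(1)(h))
    model.compile(optimizer="adam", loss="mse")
    return model


def build_outcome_net(d_x, hidden):
    """Stage 2: predict the outcome from (sampled treatment, x)."""
    p_in, x_in = layers.Input((1,)), layers.Input((d_x,))
    h = layers.Dense(hidden, activation="relu")(layers.Concatenate()([p_in, x_in]))
    model = Model([p_in, x_in], layers.Dense(1)(h))
    model.compile(optimizer="adam", loss="mse")
    return model


def select_on_val(candidates, inputs_tr, target_tr, inputs_va, target_va):
    """Fit each candidate and keep the one with the lowest held-out validation loss."""
    best, best_loss = None, np.inf
    for model in candidates:
        hist = model.fit(inputs_tr, target_tr, epochs=5, verbose=0,
                         validation_data=(inputs_va, target_va))
        if hist.history["val_loss"][-1] < best_loss:
            best, best_loss = model, hist.history["val_loss"][-1]
    return best


def fit_two_stage(z, x, p, y, sigma=0.1, hidden_grid=(32, 64)):
    n = len(y)
    tr, va = np.arange(int(0.8 * n)), np.arange(int(0.8 * n), n)  # simple 80/20 split
    # Stage 1: choose the treatment network on its own validation loss.
    stage1 = select_on_val([build_treatment_net(z.shape[1], x.shape[1], h) for h in hidden_grid],
                           [z[tr], x[tr]], p[tr], [z[va], x[va]], p[va])
    # Stage 2: sample treatments from the fitted stage-1 distribution, then choose the
    # outcome network on its own validation loss.
    p_hat = stage1.predict([z, x], verbose=0) + sigma * np.random.randn(n, 1)
    stage2 = select_on_val([build_outcome_net(x.shape[1], h) for h in hidden_grid],
                           [p_hat[tr], x[tr]], y[tr], [p_hat[va], x[va]], y[va])
    return stage1, stage2
```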