General Control Functions for Causal Effect Estimation from IVs

Authors: Aahlad Puli, Rajesh Ranganath

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate GCFN on low- and high-dimensional simulated data and on recovering the causal effect of slave export on modern community trust [30]. In Section 4, we evaluate GCFN's causal effect estimation on simulated data with the outcome, treatment, and IV observed.
Researcher Affiliation | Academia | Aahlad Puli, Computer Science, New York University (aahlad@nyu.edu); Rajesh Ranganath, Computer Science and Center for Data Science, New York University (rajeshr@cims.nyu.edu)
Pseudocode | No | The paper does not contain any clearly labeled 'Pseudocode' or 'Algorithm' blocks, nor does it present structured steps in a code-like format.
Open Source Code | No | The paper does not contain any statement about releasing code or a link to a code repository for the described methodology.
Open Datasets | Yes | We evaluate GCFN on simulated data... We then evaluate GCFN on high-dimensional data using simulations from DeepIV [18] and DeepGMM [7]... We also show recovery of the effect of slave export on current societal trust [30].
Dataset Splits | Yes | All hyperparameters for VDE, except the mutual-information coefficient κ = λ/(1 + λ), and the outcome stage were found by evaluating the respective objectives on a held-out validation set.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts) used to run the experiments.
Software Dependencies | No | The paper mentions Adam [22] as an optimizer but does not specify version numbers for any software, libraries, or frameworks used (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup | Yes | The encoder in VDE, fθ, is a 2-hidden-layer neural network that parametrizes a categorical likelihood qθ(ẑ = i | t, ϵ). The decoder is also a 2-hidden-layer network... In all experiments, the hidden layers in both encoder and decoder networks have 100 units and use ReLU activations. The outcome model is also a 2-hidden-layer neural network with ReLU activations. For the simulated data, the hidden layers in the outcome model have 50 hidden units... we train on 5000 samples with a batch size of 500 for optimizing both VDE and the outcome model for 100 epochs with Adam [22].
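The quoted setup above fully determines the network shapes, so it can be sketched directly. The following is a minimal NumPy sketch of that architecture, not the authors' code: the number of categories K for the discretized confounder ẑ, the input dimensions, and all function names are illustrative assumptions; only the layer counts, hidden widths (100 for VDE, 50 for the outcome model), ReLU activations, and batch size 500 come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(sizes):
    """Random (weight, bias) pairs for a fully-connected net with the given layer sizes."""
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    """Forward pass: ReLU on hidden layers (per the quoted setup), linear output."""
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.maximum(x, 0.0)
    return x

K = 10  # assumption: number of categories for the discretized confounder z-hat

# Encoder f_theta(t, eps): 2 hidden layers of 100 units, outputs logits of
# the categorical likelihood q_theta(z-hat = i | t, eps).
encoder = init_mlp([2, 100, 100, K])
# Decoder (z-hat, eps) -> reconstructed treatment: also 2 hidden layers of 100 units.
decoder = init_mlp([K + 1, 100, 100, 1])
# Outcome model (t, z-hat) -> y: 2 hidden layers of 50 units (simulated-data setting).
outcome = init_mlp([1 + K, 50, 50, 1])

batch = rng.standard_normal((500, 2))  # batch size 500, per the quoted setup
logits = forward(encoder, batch)
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)  # rows of probs are categorical distributions
```

Training (5000 samples, 100 epochs, Adam) is omitted; the sketch only makes the reported architecture concrete enough to sanity-check parameter counts and shapes.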