Causal Estimation with Functional Confounders

Authors: Aahlad Puli, Adler Perotte, Rajesh Ranganath

NeurIPS 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Further, we prove error bounds on LODE's effect estimates, evaluate our methods on simulated and real data, and empirically demonstrate the value of EFC. We evaluate LODE on simulated data first and show that LODE can correct for confounding. We also investigate the error induced by imperfect estimation of the surrogate intervention in LODE. Further, we run LODE on a GWAS dataset [6] and demonstrate that LODE is able to correct for confounding and recovers genetic variations that have been reported relevant to Celiac disease [8, 25, 14, 1].
Researcher Affiliation | Academia | (1) Computer Science, New York University, New York, NY 10011; (2) Biomedical Informatics, Columbia University, New York, NY 10032; (3) Center for Data Science, New York University, New York, NY 10011
Pseudocode | Yes | See Algorithm 1 for a description.
Open Source Code | No | The paper does not explicitly state that source code for their methodology is provided or available, nor does it provide a link to a code repository.
Open Datasets | Yes | We utilize data from the Wellcome Trust Celiac disease GWAS dataset [8, 6] consisting of individuals with celiac disease, called cases (n = 3796), and controls (n = 8154).
Dataset Splits | Yes | We use a 60/40 train-test split, and outcome model selection is done via cross-validation within the training data (60% of the dataset). We did 5-fold cross-validation using just the training set. (A code sketch of this split and cross-validation protocol follows the table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts, or detailed machine specifications) used for running its experiments, referring only to 'simulated data' and general experimental settings.
Software Dependencies | No | The paper mentions using 'Scikit-learn [18]' and 'kernel ridge regression' but does not provide specific version numbers for any software dependencies.
Experiment Setup | Yes | Let the dimension of t (pre-outcome variables) be T = 20 and outcome noise be η ~ N(0, 0.1). We train on 1000 samples and report conditional effect root-mean-squared error (RMSE), computed with another 1000 samples. We used a degree-2 kernel ridge regression to fit the outcome model as a function of t. We use a 60/40 train-test split, and outcome model selection is done via cross-validation within the training data (60% of the dataset). ... The best outcome model was a Lasso model, trained with regularization constant 10. We select relevant SNPs by thresholding estimated effects at a magnitude > 0.1. (A sketch of the outcome model and thresholding rule also follows the table.)
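
To make the Dataset Splits row concrete, here is a minimal scikit-learn sketch of the protocol the paper describes: a 60/40 train-test split with 5-fold cross-validation run only on the training portion. The synthetic data, the Lasso candidate grid, and the random seeds are assumptions for illustration, not details from the paper.

```python
# Minimal sketch of the 60/40 split + 5-fold CV model-selection protocol.
# The data-generating process and the alpha grid are placeholders.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import GridSearchCV, train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))                       # hypothetical covariates
y = X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.3, size=1000)

# 60% train / 40% test, as described in the paper.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.6, random_state=0)

# Outcome-model selection via 5-fold CV using only the training set.
search = GridSearchCV(Lasso(), param_grid={"alpha": [0.01, 0.1, 1.0]}, cv=5)
search.fit(X_tr, y_tr)
print("selected model:", search.best_params_)
print("held-out R^2:", search.score(X_te, y_te))
```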
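The Experiment Setup row can be sketched the same way. Below, a degree-2 polynomial kernel ridge regression outcome model is fit on 1000 samples with RMSE computed on 1000 more, and a Lasso with regularization constant 10 has its coefficients thresholded at magnitude > 0.1, mirroring the SNP-selection rule. The linear data-generating process, the placeholder effect sizes, and the kernel ridge regularization strength are assumptions; this does not reproduce the paper's simulation or its GWAS features.

```python
# Hedged sketch of the reported setup: T = 20, noise variance 0.1,
# 1000 train / 1000 evaluation samples, degree-2 kernel ridge outcome
# model, and Lasso(alpha=10) coefficients thresholded at |effect| > 0.1.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.linear_model import Lasso
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
T = 20                                    # dimension of pre-outcome variables t
beta = np.zeros(T)
beta[:3] = [30.0, -25.0, 15.0]            # placeholder effects, not the paper's

def simulate(n):
    t = rng.normal(size=(n, T))
    y = t @ beta + rng.normal(scale=np.sqrt(0.1), size=n)  # eta ~ N(0, 0.1)
    return t, y

t_train, y_train = simulate(1000)
t_eval, y_eval = simulate(1000)

# Degree-2 kernel ridge regression outcome model; alpha is an assumption.
outcome = KernelRidge(kernel="poly", degree=2, alpha=1.0).fit(t_train, y_train)
rmse = mean_squared_error(y_eval, outcome.predict(t_eval)) ** 0.5
print(f"outcome-model RMSE: {rmse:.3f}")

# Lasso with regularization constant 10; keep effects with magnitude > 0.1.
lasso = Lasso(alpha=10.0).fit(t_train, y_train)
selected = np.flatnonzero(np.abs(lasso.coef_) > 0.1)
print("selected feature indices:", selected)
```

The two-stage pattern (fit an outcome model, then threshold sparse coefficients) is shown on one shared simulation here purely for compactness; in the paper the kernel ridge model belongs to the simulated-data study and the Lasso to the GWAS analysis.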