Learning Causal Effects via Weighted Empirical Risk Minimization

Authors: Yonghan Jung, Jin Tian, Elias Bareinboim

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The paper is experimental. Evidence from Section 5: "We consider the following two practical examples shown in Fig. 2, in addition to Example 1. The derivation of target causal effects as weighted distributions by Algo. 1 is provided in Appendix A." And from Section 5.2 (Experimental Results): "We evaluate the proposed WERM learning framework against the plug-in estimators in Examples (1, 2, 3). All variables are binary except that W is set to be a vector of D binary variables to represent high-dimensional covariates. The detailed description of the corresponding SCMs is provided in Appendix D."
Researcher Affiliation | Academia | Yonghan Jung, Department of Computer Science, Purdue University, West Lafayette, IN 47907 (jung222@purdue.edu); Jin Tian, Department of Computer Science, Iowa State University, Ames, IA 50011 (jtian@iastate.edu); Elias Bareinboim, Department of Computer Science, Columbia University, New York, NY 10027 (eb@cs.columbia.edu)
Pseudocode | Yes | Algorithm 1: wID(x, y, G, P); Algorithm 2: WERM-ID-R(D, G, x, y)
Open Source Code | No | The paper contains no explicit statement about releasing its source code and no link to a code repository for the described methodology.
Open Datasets | No | The paper states, "We specify a SCM M for each causal graph and generate datasets D from M," indicating synthetic data generation rather than use of a publicly available dataset with concrete access information.
Dataset Splits | No | The paper describes generating datasets from SCMs and drawing "m_int = 10^7 samples D_int from M_x" to estimate the ground truth, but it does not specify explicit training, validation, and test splits or percentages for the generated data.
Hardware Specification | No | The paper provides no hardware details, such as GPU models, CPU types, or memory, for running the experiments.
Software Dependencies | No | The paper mentions "gradient boosting regression models" and cites xgboost as an example, but it does not give version numbers for any software dependency.
Experiment Setup | No | The paper sets H and H_W to "the gradient boosting regression classes" and uses a cross-entropy loss function, but it provides no specific hyperparameter values or detailed training configuration required for replication.
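The checklist above notes that the experiments use synthetic binary SCMs (with W a vector of D binary covariates), gradient boosting models, and a cross-entropy-style objective, but gives no runnable configuration. The following is a minimal sketch of that kind of setup, not the paper's Algorithm 1 or 2: the SCM mechanisms, the inverse-propensity weights standing in for the paper's weighted distributions, and the scikit-learn model choice are all illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n, D = 2000, 5  # sample size and covariate dimension are arbitrary choices

# Synthetic SCM in the spirit of the paper's setup: all variables binary,
# W a vector of D binary covariates. The mechanisms below are assumptions.
W = rng.integers(0, 2, size=(n, D))
p_x = 1.0 / (1.0 + np.exp(-(W.mean(axis=1) - 0.5)))   # P(X=1 | W)
X = rng.binomial(1, p_x)
p_y = 1.0 / (1.0 + np.exp(-(0.8 * X + W[:, 0] - 1.0)))  # P(Y=1 | X, W)
Y = rng.binomial(1, p_y)

# Weighted ERM idea: reweight observational samples toward the target
# (here, via simple inverse-propensity weights 1 / P(X=x | W)).
prop = np.clip(np.where(X == 1, p_x, 1.0 - p_x), 1e-3, None)
weights = 1.0 / prop

# Gradient boosting with sample weights; log-loss is the cross-entropy
# objective. Hyperparameters are placeholders, not the paper's settings.
model = GradientBoostingClassifier(n_estimators=100, max_depth=2)
model.fit(np.column_stack([X, W]), Y, sample_weight=weights)

# Plug-in estimate of E[Y | do(X=1)]: predict with X fixed to 1, average over W.
X1 = np.column_stack([np.ones(n, dtype=int), W])
effect = model.predict_proba(X1)[:, 1].mean()
```

A replication attempt would substitute the actual weighted distributions derived by the paper's Algorithm 1 for these ad-hoc weights, and xgboost for the scikit-learn model.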