Unified Covariate Adjustment for Causal Inference
Authors: Yonghan Jung, Jin Tian, Elias Bareinboim
NeurIPS 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments corroborate the scalability and robustness of the proposed framework. We demonstrate scalability and robustness to bias both theoretically and empirically through simulations. |
| Researcher Affiliation | Academia | Yonghan Jung (Purdue University), Jin Tian (Mohamed bin Zayed University of Artificial Intelligence), and Elias Bareinboim (Columbia University) |
| Pseudocode | Yes | Algorithm 1: Tian-to-UCA(G, V := (V1, ..., VK, Y)); Algorithm 2: DML-UCA({D^i}, L) |
| Open Source Code | No | The paper states in the NeurIPS checklist that 'Every detail in the simulations is sufficiently provided to reproduce the results' but does not provide a direct link to a code repository or explicitly state that the code is open-source. |
| Open Datasets | No | The paper states in Appendix F: 'We define the following structural causal models:... We include a segment of the code employed to generate the dataset.' This indicates synthetic data generation rather than the use of a publicly available dataset with concrete access information. |
| Dataset Splits | Yes | (Sample splitting) For each i ∈ [m + 1], randomly split D^i ∼ iid P^i into L folds. Let D^i_ℓ denote the ℓ-th partition, and define D^i_{-ℓ} := D^i \ D^i_ℓ. (A fold-splitting sketch is given below the table.) |
| Hardware Specification | No | Appendix F, in the XGBoost parameterization section, mentions 'n_jobs: 4 # Assuming you have 4 cores', but no specific hardware models (CPU, GPU, or cloud instances) are provided. |
| Software Dependencies | No | Appendix F states: 'As described in Sec. 4, we used the XGBoost (Chen and Guestrin, 2016) as a model for estimating nuisances. We implemented the model using Python.' No specific version numbers for Python or XGBoost are given. |
| Experiment Setup | Yes | Appendix F details parameterizations for XGBoost, including 'mu_params' and 'pi_params' dictionaries with specific values for 'booster', 'eta', 'gamma', 'max_depth', 'min_child_weight', 'subsample', 'colsample_bytree', 'lambda', 'alpha', 'objective', 'eval_metric', and 'n_jobs' (a hedged configuration sketch follows below). |
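
As a companion to the Dataset Splits row, the following is a minimal sketch of the L-fold sample splitting quoted from the paper, producing pairs that mirror D^i_ℓ (the ℓ-th partition) and D^i_{-ℓ} := D^i \ D^i_ℓ. The helper name `split_into_folds`, the fold count, the seed, and the toy data are illustrative assumptions, not the authors' code.

```python
# Sketch of L-fold sample splitting (assumed helper; not the paper's implementation).
import numpy as np

def split_into_folds(data: np.ndarray, num_folds: int, seed: int = 0):
    """Randomly partition `data` into `num_folds` disjoint folds.

    Returns a list of (D_ell, D_minus_ell) pairs, mirroring the paper's
    D^i_ell (the ell-th partition) and D^i_{-ell} := D^i \\ D^i_ell.
    """
    rng = np.random.default_rng(seed)
    indices = rng.permutation(len(data))          # random shuffle of row indices
    folds = np.array_split(indices, num_folds)    # L roughly equal index folds
    pairs = []
    for ell in range(num_folds):
        complement = np.concatenate([folds[k] for k in range(num_folds) if k != ell])
        pairs.append((data[folds[ell]], data[complement]))
    return pairs

# Example: a 3-fold split of a toy 9-row dataset.
toy_data = np.arange(18).reshape(9, 2)
for ell, (fold, rest) in enumerate(split_into_folds(toy_data, num_folds=3)):
    print(f"fold {ell}: |D_ell| = {len(fold)}, |D_-ell| = {len(rest)}")
```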
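
For the Experiment Setup row, the sketch below shows how nuisance models could be fit with XGBoost using parameter dictionaries built from the keys quoted from Appendix F. Only the key names (and `n_jobs: 4`, mapped here to the native `nthread` parameter) come from the paper; every value, the toy data, and the variable names are placeholders, not the authors' settings.

```python
# Hypothetical reconstruction of the 'mu_params' / 'pi_params' XGBoost configuration.
# All values below are placeholders; only the parameter names are taken from the quote.
import numpy as np
import xgboost as xgb

mu_params = {
    "booster": "gbtree",
    "eta": 0.1,                        # learning rate (placeholder)
    "gamma": 0.0,                      # min loss reduction per split (placeholder)
    "max_depth": 6,
    "min_child_weight": 1,
    "subsample": 1.0,
    "colsample_bytree": 1.0,
    "lambda": 1.0,                     # L2 regularization
    "alpha": 0.0,                      # L1 regularization
    "objective": "reg:squarederror",   # outcome-regression nuisance (mu)
    "eval_metric": "rmse",
    "nthread": 4,                      # the appendix quotes 'n_jobs: 4'
}
# Propensity-style nuisance (pi): same placeholders with a classification objective.
pi_params = {**mu_params, "objective": "binary:logistic", "eval_metric": "logloss"}

# Toy data: covariates X, binary treatment A, continuous outcome Y.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
A = rng.integers(0, 2, size=200)
Y = X @ rng.normal(size=5) + A + rng.normal(size=200)

mu_model = xgb.train(mu_params, xgb.DMatrix(X, label=Y), num_boost_round=50)
pi_model = xgb.train(pi_params, xgb.DMatrix(X, label=A), num_boost_round=50)

print(mu_model.predict(xgb.DMatrix(X[:3])))   # outcome-regression predictions
print(pi_model.predict(xgb.DMatrix(X[:3])))   # propensity scores
```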