Covariate balancing using the integral probability metric for causal inference
Authors: Insung Kong, Yuha Park, Joonhyuk Jung, Kwonsang Lee, Yongdai Kim
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We illustrate the superiority of the CBIPM over existing baselines by analyzing simulated and real datasets. In Sections 5.1 and 5.2, we present the experimental results using simulation and real datasets, respectively. |
| Researcher Affiliation | Academia | Insung Kong, Yuha Park, Joonhyuk Jung, Kwonsang Lee, Yongdai Kim (Department of Statistics, Seoul National University). |
| Pseudocode | Yes | Algorithm 1 Proposed algorithm for the ATT |
| Open Source Code | Yes | The code is available at https://github.com/ggong369/CBIPM. |
| Open Datasets | Yes | We generate simulated datasets using the Kang-Schafer example (Kang & Schafer, 2007). The Tennessee Student/Teacher Achievement Ratio experiment (STAR) is a 4-year longitudinal class-size study... (Achilles et al., 2008). |
| Dataset Splits | No | The paper does not specify exact train/validation/test split percentages or sample counts for any of the datasets used, nor does it refer to predefined splits with citations for these purposes. |
| Hardware Specification | Yes | We use R (ver. 4.0.2), Python (ver. 3.6), and NVIDIA TITAN Xp GPUs to obtain the estimates of the ATT and the ATE. |
| Software Dependencies | Yes | We use R (ver. 4.0.2), Python (ver. 3.6)... We use twang package (Ridgeway et al., 2017)... CBPS is implemented using CBPS package (Fong et al., 2022)... EB is implemented using EB package (Hainmueller & Hainmueller, 2022)... We use Adam (Kingma & Ba, 2014) optimizer... |
| Experiment Setup | Yes | For both the P-CBIPM and the N-CBIPM, we use a neural network with 100 hidden nodes with leaky ReLU. We use the Adam (Kingma & Ba, 2014) optimizer with lr = 0.03 and T = 1000 for gradient descent steps, and the Adam optimizer with lr_adv = 0.3, T_adv = 5 for gradient ascent steps. τ = 0.3 and R = 100 are used. |
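The descent/ascent hyperparameters in the Experiment Setup row describe a minimax scheme: an inner loop maximizes an IPM witness (the paper uses a 100-node leaky-ReLU network as the critic) while an outer loop updates balancing weights on control units. The sketch below is *not* the paper's Algorithm 1; it is a minimal, self-contained illustration of the same alternating structure under simplifying assumptions: a linear critic on the unit ball (so the IPM reduces to mean balancing), softmax-parameterized weights updated by plain gradient steps instead of Adam, and hypothetical toy data `X0`, `X1`. Only the loop counts and step sizes (`lr = 0.03`, `T = 1000`, `lr_adv = 0.3`, `T_adv = 5`) mirror the reported settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy covariates (not the paper's data)
X1 = rng.normal(1.0, 1.0, size=(50, 3))   # treated group
X0 = rng.normal(0.0, 1.0, size=(200, 3))  # control group

theta = np.zeros(len(X0))  # softmax logits for control-unit weights
v = np.zeros(3)            # linear critic, constrained to the unit ball

lr, T = 0.03, 1000         # descent settings, mirroring lr = 0.03, T = 1000
lr_adv, T_adv = 0.3, 5     # ascent settings, mirroring lr_adv = 0.3, T_adv = 5

mu1 = X1.mean(axis=0)      # treated covariate mean (the ATT balancing target)
for _ in range(T):
    w = np.exp(theta - theta.max())
    w /= w.sum()
    # Inner ascent: push the critic toward the direction of largest imbalance,
    # maximizing v . (mu1 - weighted control mean) over the unit ball
    for _ in range(T_adv):
        v += lr_adv * (mu1 - X0.T @ w)
        v /= max(np.linalg.norm(v), 1.0)  # project back onto the unit ball
    # Outer descent: move the weights to shrink the resulting IPM surrogate
    # (gradient of -v . (X0^T w) through the softmax parameterization)
    s = X0 @ v
    theta += lr * w * (s - w @ s)

w = np.exp(theta - theta.max())
w /= w.sum()
print(np.linalg.norm(mu1 - X0.mean(axis=0)))  # imbalance before weighting
print(np.linalg.norm(mu1 - X0.T @ w))         # imbalance after weighting
```

With a linear critic the inner supremum is attained at the normalized mean gap, so the outer loop is effectively driving the weighted control mean toward the treated mean; the neural-network critic in the paper plays the same role for a richer function class.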