Uncertainty Quantification in Heterogeneous Treatment Effect Estimation with Gaussian-Process-Based Partially Linear Model

Authors: Shunsuke Horii, Yoichi Chikahara

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our experimental results show that even in the small sample size setting, our method can accurately estimate the heterogeneous treatment effects and effectively quantify its estimation uncertainty." (Abstract; Section 6, Experiments)
Researcher Affiliation | Collaboration | Shunsuke Horii (1), Yoichi Chikahara (2); (1) Center for Data Science, Waseda University, Tokyo, Japan; (2) NTT Communication Science Laboratories, Kyoto, Japan
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks explicitly labeled as "Pseudocode" or "Algorithm".
Open Source Code | Yes | "Our code is publicly available at https://github.com/holyshun/GP-PLM."
Open Datasets | Yes | Synthetic data: "As with Nie and Wager (2021), we prepared synthetic data that follow the partially linear model (4)." Semi-synthetic data: "For binary treatment setup, we used the Atlantic Causal Inference Conference (ACIC) dataset (Shimoni et al. 2018)" and "The data of d = 177 features come from the Linked Birth and Infant Death Data (LBIDD) (MacDorman and Atkinson 1998)."
Dataset Splits | No | The paper specifies training and test data sizes ("n = 200 or n = 500 observations as training data and m = 100 observations as test data") but does not explicitly mention a validation split or cross-validation strategy.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) used for running its experiments.
Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., library names with specific versions such as "Python 3.8, PyTorch 1.9").
Experiment Setup | No | The paper mentions general experimental settings, such as the number of experiments and observations, and states that hyperparameters are optimized using "grid search and the gradient descent method," but it does not provide specific hyperparameter values or detailed training configurations (e.g., learning rate, batch size).
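The Open Datasets row quotes the paper as generating synthetic data that follow a partially linear model, as in Nie and Wager (2021). The sketch below is a minimal illustration of such a data-generating process, Y = τ(X)·T + g(X) + ε; the specific choices of τ, g, the treatment assignment, and the noise scale are assumptions for illustration, not the paper's equation (4).

```python
import numpy as np

rng = np.random.default_rng(0)

def make_plm_data(n, d=5):
    """Draw (X, T, Y) from an illustrative partially linear model
    Y = tau(X) * T + g(X) + eps. The functions tau and g below are
    hypothetical placeholders, not the paper's equation (4)."""
    X = rng.uniform(size=(n, d))
    tau = 1.0 + X[:, 0]            # heterogeneous treatment effect (assumed form)
    g = np.sin(np.pi * X[:, 1])    # baseline/nuisance function (assumed form)
    T = rng.binomial(1, 0.5, size=n)               # binary treatment
    Y = tau * T + g + rng.normal(scale=0.5, size=n)  # outcome with noise
    return X, T, Y

# Sample sizes matching the splits reported in the paper.
X_train, T_train, Y_train = make_plm_data(200)  # n = 200 training observations
X_test, T_test, Y_test = make_plm_data(100)     # m = 100 test observations
```

Under a model of this form, τ(X) is the quantity a heterogeneous-treatment-effect estimator targets, which is why the simulation can score estimates against the known ground truth.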
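The Experiment Setup row notes that hyperparameters are tuned by "grid search and the gradient descent method" without giving values. As a rough sketch of the grid-search half only, assuming a zero-mean GP with an RBF kernel and a fixed noise level (the paper's exact kernel and settings are not stated), candidate lengthscales can be scored by the GP log marginal likelihood:

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf_kernel(X, lengthscale):
    """Squared-exponential (RBF) kernel matrix."""
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq_dists / lengthscale**2)

def log_marginal_likelihood(X, y, lengthscale, noise=0.1):
    """log N(y | 0, K + noise^2 I) for a zero-mean GP, via Cholesky."""
    n = len(y)
    K = rbf_kernel(X, lengthscale) + noise**2 * np.eye(n)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # K^{-1} y
    return (-0.5 * y @ alpha                  # data-fit term
            - np.log(np.diag(L)).sum()        # -0.5 * log det K
            - 0.5 * n * np.log(2 * np.pi))

# Toy regression data from a smooth function (assumed, for illustration).
X = rng.uniform(-3, 3, size=(50, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=50)

# Grid search: keep the lengthscale with the highest marginal likelihood.
grid = [0.1, 0.3, 1.0, 3.0, 10.0]
best = max(grid, key=lambda ls: log_marginal_likelihood(X, y, ls))
```

In practice a coarse grid like this is often followed by gradient ascent on the same marginal likelihood from the best grid point, which matches the combined "grid search and gradient descent" procedure the paper mentions.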