Estimating Conditional Average Treatment Effects via Sufficient Representation Learning
Authors: Pengfei Shi, Wei Zhong, Xinyu Zhang, Ningtao Wang, Xing Fu, Weiqiang Wang, Yin Jin
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Numerical simulations and empirical results demonstrate that our method outperforms the competitive approaches. In Section 4, we conduct Monte Carlo simulation and experiments. We apply our approach on three datasets: i) synthetic datasets, where the covariates and outcomes are all simulated so that we know the true CATE; ii) semi-synthetic datasets, Infant Health and Development Program (IHDP), where the covariates are real and the outcomes are simulated, hence we also know the true CATE; iii) real datasets, Jobs, where the covariates and the outcomes are all real, so we do not know the true CATE. |
| Researcher Affiliation | Collaboration | 1Xiamen University, Xiamen, China 2Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing, China 3Ant Group, Hangzhou, China |
| Pseudocode | Yes | Algorithm 1 Cross Net |
| Open Source Code | No | The paper does not provide an explicit statement or link indicating that the source code for the proposed method ("Cross Net") is openly available. |
| Open Datasets | Yes | We conduct experiments on the well-known IHDP dataset created by [Hill, 2011], where the covariates are real and the outcomes are simulated. We also apply our approach to the Jobs dataset first analyzed by [LaLonde, 1986]. |
| Dataset Splits | Yes | We split the dataset into train/validation/test sets and repeat 10 times to average the results. |
| Hardware Specification | No | The paper does not provide specific details about the hardware specifications used for running the experiments. |
| Software Dependencies | No | The paper does not specify software dependencies with version numbers required for replication. |
| Experiment Setup | No | The paper states general experimental settings, such as sample sizes and the number of repetitions (e.g., "For each simulation setting, we conducted experiments with different sample sizes, n = 500, 1000, 2000, 5000. Each experiment was repeated 10 times"), and notes that the network architecture is "the same", but it does not provide concrete hyperparameter values (e.g., learning rate, batch size, number of epochs) or specific optimizer settings needed for reproducibility. |
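The evaluation protocol the report extracts (simulated data with a known true CATE, a train/validation/test split repeated 10 times, results averaged) can be sketched as below. This is a minimal illustration, not the paper's Cross Net: the data-generating process, the 60/20/20 split ratio, and the difference-in-means estimator are all assumptions standing in for details the paper does not specify.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_synthetic(n):
    """Toy synthetic dataset in the spirit of setting (i): covariates and
    outcomes are simulated, so the true CATE is known. The functional
    forms here are illustrative, not the paper's."""
    X = rng.normal(size=(n, 5))
    tau = X[:, 0] + 0.5 * X[:, 1]           # true CATE
    t = rng.binomial(1, 0.5, size=n)        # randomized treatment
    y = X.sum(axis=1) + t * tau + rng.normal(scale=0.1, size=n)
    return X, t, y, tau

def split_indices(n, frac=(0.6, 0.2, 0.2)):
    """One random train/validation/test split (ratios are an assumption)."""
    idx = rng.permutation(n)
    n_tr = int(frac[0] * n)
    n_va = int(frac[1] * n)
    return idx[:n_tr], idx[n_tr:n_tr + n_va], idx[n_tr + n_va:]

# Repeat the split 10 times and average, as the report describes.
n = 1000
X, t, y, tau = make_synthetic(n)
pehe_scores = []
for _ in range(10):
    tr, va, te = split_indices(n)
    # Naive difference-in-means estimator as a stand-in for the paper's model.
    tau_hat = y[tr][t[tr] == 1].mean() - y[tr][t[tr] == 0].mean()
    pehe_scores.append(float(np.sqrt(np.mean((tau[te] - tau_hat) ** 2))))
avg_pehe = float(np.mean(pehe_scores))
```

Because the true CATE `tau` is known on synthetic data, an error metric against it (here a PEHE-style root-mean-squared error on the test set) can be averaged over the 10 repetitions, which is exactly what the IHDP and synthetic experiments in the paper rely on.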