An Adaptive Kernel Approach to Federated Learning of Heterogeneous Causal Effects

Authors: Thanh Vinh Vo, Arnab Bhattacharyya, Young Lee, Tze-Yun Leong

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | From Section 5 (Experiments), "The baselines": In this section, we first carry out the experiments to examine the performance of CausalRFF against standard baselines such as BART (Hill 2011), TARNet (Shalit et al. 2017), CFR-wass (CFRNet with Wasserstein distance) (Shalit et al. 2017), CFR-mmd (CFRNet with maximum mean discrepancy distance) (Shalit et al. 2017), CEVAE (Louizos et al. 2017), Ortho RF (Oprescu et al. 2019), X-learner (Künzel et al. 2019), R-learner (Nie and Wager 2020), and FedCI (Vo et al. 2022).
Researcher Affiliation | Collaboration | (1) School of Computing, National University of Singapore; (2) Roche AG and Harvard University
Pseudocode | No | The paper describes its methods verbally and mathematically but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Source code: https://github.com/vothanhvinh/CausalRFF
Open Datasets | Yes | The Infant Health and Development Program (IHDP) (Hill 2011) is a randomized study on the impact of specialist visits (the treatment) on the cognitive development of children (the outcome).
Dataset Splits | Yes | In each source, we use 50 data points for training, 450 for testing and 400 for validating. (A per-source split sketch follows the table.)
Hardware Specification | No | The provided text does not contain specific details about the hardware (e.g., GPU/CPU models, memory) used for the experiments.
Software Dependencies | No | The paper mentions software packages such as 'BartPy', 'causalml', and 'econml', but does not provide specific version numbers for these or other software dependencies.
Experiment Setup | Yes | For all methods, the learning rate is fine-tuned from 10^-4 to 10^-1 with a step size of multiplication by 10. Similarly, the regularizer factors are also fine-tuned from 10^-4 to 10^0 with a step size of multiplication by 10. We report two error metrics, ε_PEHE (precision in estimation of heterogeneous effects) and ε_ATE (absolute error on the average treatment effect), to compare the methods. (The tuning grid and the two metrics are sketched below the table.)
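
The per-source split quoted in the Dataset Splits row is small enough to restate in code. The following is a minimal sketch under the assumption of 900 rows per source; the function name split_source, the random seeding, and the (X, y, w) layout are illustrative choices, not taken from the paper's released code.

import numpy as np

def split_source(X, y, w, seed=0):
    """Split one source's covariates X, outcomes y and treatments w into
    the quoted 50 train / 450 test / 400 validation points (900 rows assumed)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    parts = {"train": idx[:50], "test": idx[50:500], "valid": idx[500:900]}
    return {name: (X[i], y[i], w[i]) for name, i in parts.items()}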
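
The Experiment Setup row quotes a multiplicative tuning grid and two standard error metrics. Below is a minimal sketch assuming the usual definitions of ε_PEHE (mean squared error on individual treatment effects, often reported as its square root) and ε_ATE (absolute error on the average treatment effect); it is not the paper's implementation, and the synthetic example values are purely illustrative. In practice each (learning rate, regularizer) pair would be scored on the validation split and the best pair kept.

import numpy as np

def pehe(tau_true, tau_pred):
    """ε_PEHE: mean squared error between true and estimated
    individual-level (heterogeneous) treatment effects."""
    return np.mean((tau_pred - tau_true) ** 2)

def ate_error(tau_true, tau_pred):
    """ε_ATE: absolute error of the estimated average treatment effect."""
    return np.abs(np.mean(tau_pred) - np.mean(tau_true))

# Multiplicative grids as described in the quoted setup:
# learning rate 1e-4 ... 1e-1, regularizer factor 1e-4 ... 1e0,
# each step multiplying by 10.
learning_rates = [10.0 ** k for k in range(-4, 0)]   # 1e-4, 1e-3, 1e-2, 1e-1
reg_factors    = [10.0 ** k for k in range(-4, 1)]   # 1e-4, ..., 1e0

if __name__ == "__main__":
    # Synthetic effects only, to show how the metrics are evaluated
    # on e.g. the 450 test points of one source.
    rng = np.random.default_rng(0)
    tau_true = rng.normal(4.0, 1.0, size=450)
    tau_pred = tau_true + rng.normal(0.0, 0.5, size=450)
    print("sqrt(PEHE):", np.sqrt(pehe(tau_true, tau_pred)))
    print("ATE error:", ate_error(tau_true, tau_pred))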