Adversarially Balanced Representation for Continuous Treatment Effect Estimation

Authors: Amirreza Kazemi, Martin Ester

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our experimental evaluation on semi-synthetic datasets demonstrates the empirical superiority of ACFR over a range of state-of-the-art methods." "We conduct an experimental comparison of ACFR against state-of-the-art methods on semi-synthetic datasets, News and TCGA, and analyze the robustness to varying levels of treatment-selection bias for the methods." "In this section, we present our experimental results."
Researcher Affiliation | Academia | "School of Computing Science, Simon Fraser University {aka208, ester}@sfu.ca"
Pseudocode | Yes | "Algorithm 1: Adversarial Counter Factual Regression"
Open Source Code | Yes | "The code for synthetic data generation and implementation of the methods can be found here: https://github.com/amirrezakazemi/acfr"
Open Datasets | Yes | "We used TCGA (Network et al. 2013) and News (Johansson, Shalit, and Sontag 2016) semi-synthetic datasets."
Dataset Splits | Yes | "We then split the datasets with 68/12/20 ratio into training, validation, and test sets."
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., 'Python 3.8, PyTorch 1.9, and CUDA 11.1').
Experiment Setup | No | The paper mentions parameters like 'batch size b, iteration number T, inner loop size M, trade-off parameter γ, and the step sizes η1 and η2' in Algorithm 1, but does not provide their specific values or other training configurations such as learning rates or optimizer settings in the main text.
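The hyperparameters named in the Experiment Setup row (outer iteration count T, inner loop size M, trade-off parameter γ, and step sizes η1, η2) suggest a standard alternating min-max training loop. The sketch below is a hypothetical illustration of that loop structure on a toy saddle-point objective; it is not the ACFR objective, and the dataset batching step (batch size b) is omitted.

```python
def alternating_min_max(T=200, M=10, gamma=1.0, eta1=0.05, eta2=0.5):
    """Toy alternating optimization: min over x, max over y of
    gamma * (x*y - 0.5*y**2). Illustrates the roles of T (outer
    iterations), M (inner adversary steps), gamma (trade-off),
    and eta1/eta2 (step sizes) only; NOT the ACFR objective."""
    x, y = 1.0, 0.0
    for _ in range(T):
        # Inner loop: M gradient-ascent steps on the adversary variable y.
        for _ in range(M):
            y += eta2 * gamma * (x - y)
        # Outer step: one gradient-descent step on the primal variable x.
        x -= eta1 * gamma * y
    return x, y
```

On this toy objective the iterates contract toward the saddle point (0, 0), which is what one would check before plugging in real networks and losses.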
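For reference, the 68/12/20 train/validation/test split noted in the Dataset Splits row can be reproduced with a shuffled index split. The helper below is a hypothetical sketch: the function name and the fixed seed are assumptions, not details from the paper.

```python
import random

def split_68_12_20(n_samples, seed=0):
    """Shuffle sample indices and split them 68/12/20 into
    train/validation/test index lists. Seed is assumed, not
    taken from the paper."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    n_train = round(0.68 * n_samples)
    n_val = round(0.12 * n_samples)
    train = idx[:n_train]
    val = idx[n_train:n_train + n_val]
    test = idx[n_train + n_val:]
    return train, val, test

# e.g. 1000 samples -> 680 train, 120 validation, 200 test indices
train, val, test = split_68_12_20(1000)
```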