A Dual-module Framework for Counterfactual Estimation over Time

Authors: Xin Wang, Shengfei Lyu, Lishan Yang, Yibing Zhan, Huanhuan Chen

ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we validate the effectiveness of the proposed ACTIN through a series of experiments. Following the conventional workflow of counterfactual inference benchmarks (Melnychuk et al., 2022), we conduct comparative analyses of ACTIN against existing models on both simulated and real datasets. Subsequently, we examine in detail the running time and complexity of the baseline methods and ACTIN on different datasets. Ultimately, we experimentally explore the roles of different components in ACTIN.
Researcher Affiliation | Collaboration | (1) School of Computer Science and Technology, University of Science and Technology of China, China; (2) Nanyang Technological University, Singapore; (3) JD Explore Academy.
Pseudocode | Yes | Algorithm 1: Pseudocode of Training ACTIN
Open Source Code | Yes | Code is available online: https://github.com/waxin/ACTIN
Open Datasets | Yes | The Medical Information Mart for Intensive Care III (MIMIC-III) (Johnson et al., 2016) is a comprehensive database of electronic health records for patients in intensive care units, often used to evaluate the effectiveness of models in real and complex medical scenarios. [...] MIMIC-III, a freely accessible critical care database. Scientific Data, 3(1):1-9, 2016.
Dataset Splits | Yes | This cohort was then distributed into training, validation, and testing datasets in a 70%/15%/15% proportion. [...] the dataset comprising 1,000 patients is partitioned into training, validation, and testing sets, adhering to a 60%/20%/20% distribution ratio. (A hedged split sketch follows the table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory specifications) used for running the experiments.
Software Dependencies | No | The paper mentions implementing ACTIN using the PyTorch Lightning framework and choosing the Adam algorithm for gradient optimization, but it does not specify version numbers for these software dependencies or any other key libraries. (A hedged configuration sketch follows the table.)
Experiment Setup | Yes | we conduct hyperparameter optimization for all baseline models and ACTIN using random searches. The ranges for the random searches for RMSN, CRN, G-Net, and CT are provided in Tables 6, 7, 8, and 9, respectively. The random search space for ACTIN is outlined in Table 10. (A hedged random-search sketch follows the table.)
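
The Dataset Splits row quotes patient-level 70%/15%/15% and 60%/20%/20% partitions. The snippet below is a minimal sketch of such a split, not the authors' code; the function name, seed, and array of patient IDs are illustrative assumptions.

```python
import numpy as np

def split_patients(patient_ids, fractions=(0.7, 0.15, 0.15), seed=0):
    """Shuffle patient IDs and split them into train/validation/test sets.

    `fractions` must sum to 1, e.g. (0.7, 0.15, 0.15) for the MIMIC-III cohort
    or (0.6, 0.2, 0.2) for the simulated 1,000-patient dataset.
    """
    rng = np.random.default_rng(seed)
    ids = rng.permutation(np.asarray(patient_ids))
    n_train = int(fractions[0] * len(ids))
    n_val = int(fractions[1] * len(ids))
    train = ids[:n_train]
    val = ids[n_train:n_train + n_val]
    test = ids[n_train + n_val:]
    return train, val, test

# Example: 1,000 simulated patients split 60%/20%/20%.
train_ids, val_ids, test_ids = split_patients(np.arange(1000), fractions=(0.6, 0.2, 0.2))
```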
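The Software Dependencies row notes that ACTIN is implemented with PyTorch Lightning and trained with Adam, without version numbers. The sketch below only shows how that framework/optimizer pairing is typically wired together; the module body, dimensions, and learning rate are placeholders, not the ACTIN architecture or the authors' configuration.

```python
import torch
import pytorch_lightning as pl


class LitModel(pl.LightningModule):
    """Placeholder LightningModule; the actual ACTIN dual-module design is not reproduced here."""

    def __init__(self, input_dim: int, hidden_dim: int, lr: float = 1e-3):
        super().__init__()
        self.lr = lr
        self.net = torch.nn.Sequential(
            torch.nn.Linear(input_dim, hidden_dim),
            torch.nn.ReLU(),
            torch.nn.Linear(hidden_dim, 1),
        )

    def training_step(self, batch, batch_idx):
        # Simple regression loss on a (features, target) batch.
        x, y = batch
        pred = self.net(x).squeeze(-1)
        loss = torch.nn.functional.mse_loss(pred, y)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        # The paper specifies Adam for gradient optimization.
        return torch.optim.Adam(self.parameters(), lr=self.lr)
```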
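The Experiment Setup row states that hyperparameters for all baselines and ACTIN are tuned by random search over the ranges in Tables 6-10. Those ranges are not reproduced here, so the search space and the `train_and_validate` callback below are purely illustrative assumptions showing the random-search procedure itself.

```python
import random

# Illustrative search space only; the actual ranges appear in Tables 6-10 of the paper.
SEARCH_SPACE = {
    "learning_rate": [1e-4, 3e-4, 1e-3, 3e-3],
    "hidden_dim": [32, 64, 128, 256],
    "dropout": [0.1, 0.2, 0.3],
}

def random_search(train_and_validate, n_trials=20, seed=0):
    """Sample configurations uniformly at random and keep the best validation score."""
    rng = random.Random(seed)
    best_config, best_score = None, float("inf")
    for _ in range(n_trials):
        config = {name: rng.choice(values) for name, values in SEARCH_SPACE.items()}
        score = train_and_validate(config)  # lower is better, e.g. validation RMSE
        if score < best_score:
            best_config, best_score = config, score
    return best_config, best_score
```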