Estimating Treatment Effects from Irregular Time Series Observations with Hidden Confounders
Authors: Defu Cao, James Enouen, Yujing Wang, Xiangchen Song, Chuizheng Meng, Hao Niu, Yan Liu
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Furthermore, we conduct extensive experiments on both synthetic and real-world datasets to demonstrate the effectiveness and scalability of LipCDE. |
| Researcher Affiliation | Collaboration | University of Southern California; Peking University; Carnegie Mellon University; KDDI Research, Inc. |
| Pseudocode | No | The paper describes the model architecture and components, but it does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statements about releasing source code or links to a code repository. |
| Open Datasets | Yes | In this section, we estimate the treatment effects for each time step by one-step-ahead predictions on both a synthetic dataset and real-world datasets, including the MIMIC-III (Johnson et al. 2016) and COVID-19 (Steiger, Mußgnug, and Kroll 2020) datasets. |
| Dataset Splits | No | The paper mentions using synthetic, MIMIC-III, and COVID-19 datasets and specifies parameters for the synthetic dataset (T = 30 maximum time steps, N = 5000 patient trajectories), but it does not provide explicit training, validation, and test split percentages or sample counts; a hypothetical split is sketched after the table. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU or CPU models. |
| Software Dependencies | No | The paper mentions software components such as neural controlled differential equations (CDEs), an LSTM layer, and an RNN model, but it does not provide specific version numbers for any software dependencies; a dependency-free sketch of the CDE component appears after the table. |
| Experiment Setup | No | The paper describes the dataset parameters (e.g., T = 30, N = 5000 for synthetic data) and architectural components (e.g., two stacked LSTM layers, a linear FC layer, an MSE loss), but it does not provide specific hyperparameter values such as learning rate, batch size, or number of epochs; see the architecture sketch after the table. |
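
Because the paper fixes only the synthetic-data size (N = 5000 trajectories, T = 30 maximum time steps) and never states a split, any reproduction must choose one. Below is a minimal sketch assuming a 70/15/15 train/validation/test split; the proportions and the random seed are assumptions, not values from the paper.

```python
import numpy as np

# N = 5000 patient trajectories, as stated in the paper; the 70/15/15
# proportions and the seed are assumptions made for this sketch.
N = 5000
rng = np.random.default_rng(seed=0)
idx = rng.permutation(N)

n_train = int(0.70 * N)  # 3500 trajectories
n_val = int(0.15 * N)    # 750 trajectories

train_idx = idx[:n_train]
val_idx = idx[n_train:n_train + n_val]
test_idx = idx[n_train + n_val:]  # remaining 750 trajectories
```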
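
The paper builds on neural controlled differential equations but names no library or version. The sketch below illustrates the CDE mechanism, dz = f_θ(z) dX, with an explicit Euler discretization in plain PyTorch; `CDEFunc` and `euler_cde` are hypothetical names, and this is an illustration of the general technique, not the authors' LipCDE implementation (in particular, the Lipschitz bounding of the hidden-confounder dynamics is omitted).

```python
import torch
import torch.nn as nn

class CDEFunc(nn.Module):
    """Vector field f_theta: maps hidden state z to a (hidden x input) matrix."""
    def __init__(self, hidden_dim: int, input_dim: int):
        super().__init__()
        self.hidden_dim = hidden_dim
        self.input_dim = input_dim
        self.net = nn.Sequential(
            nn.Linear(hidden_dim, 64), nn.Tanh(),
            nn.Linear(64, hidden_dim * input_dim),
        )

    def forward(self, z):
        return self.net(z).view(-1, self.hidden_dim, self.input_dim)

def euler_cde(func, z0, x):
    """Explicit-Euler CDE solve: z_{k+1} = z_k + f(z_k) @ (x_{k+1} - x_k).

    x: (batch, time, input_dim) observation path; z0: (batch, hidden_dim).
    """
    z = z0
    for k in range(x.shape[1] - 1):
        dx = x[:, k + 1] - x[:, k]  # path increment between observations
        z = z + torch.bmm(func(z), dx.unsqueeze(-1)).squeeze(-1)
    return z

# Tiny smoke test: batch of 4 paths, T = 30 steps, 8 observed channels.
z_T = euler_cde(CDEFunc(hidden_dim=32, input_dim=8),
                torch.zeros(4, 32), torch.randn(4, 30, 8))
```

In practice the irregular observations are first interpolated into a continuous path (e.g., with cubic splines) before integration; the Euler loop above merely keeps the sketch dependency-free.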
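
The setup row pins down the described pieces, two stacked LSTM layers, a linear fully connected head, and an MSE objective, but no training constants. The following minimal sketch is consistent with that description; the hidden size, input dimension, optimizer, learning rate, and batch size are assumptions, and `OutcomePredictor` is a hypothetical name.

```python
import torch
import torch.nn as nn

class OutcomePredictor(nn.Module):
    """Two stacked LSTM layers + linear FC head, per the paper's description."""
    def __init__(self, input_dim: int, hidden_dim: int = 64, output_dim: int = 1):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden_dim, num_layers=2, batch_first=True)
        self.fc = nn.Linear(hidden_dim, output_dim)

    def forward(self, x):
        h, _ = self.lstm(x)  # (batch, time, hidden_dim)
        return self.fc(h)    # one-step-ahead outcome at every time step

model = OutcomePredictor(input_dim=8)                      # input_dim assumed
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # lr is an assumption
loss_fn = nn.MSELoss()                                     # MSE loss, per the paper

x = torch.randn(32, 30, 8)  # batch of 32 (assumed), T = 30 steps (from paper)
y = torch.randn(32, 30, 1)
optimizer.zero_grad()
loss_fn(model(x), y).backward()
optimizer.step()
```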