Learning Counterfactual Representations for Estimating Individual Dose-Response Curves
Authors: Patrick Schwab, Lorenz Linhardt, Stefan Bauer, Joachim M. Buhmann, Walter Karlen (pp. 5612–5619)
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments show that the methods developed in this work set a new state-of-the-art in estimating individual dose-response. |
| Researcher Affiliation | Academia | ¹Institute of Robotics and Intelligent Systems, ²Department of Computer Science, ETH Zurich, Switzerland; ³Max Planck Institute for Intelligent Systems, Tübingen, Germany |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The source code for this work is available at https://github.com/d909b/drnet. |
| Open Datasets | Yes | News. The News benchmark consisted of 5000 randomly sampled news articles from the NY Times corpus... MVICU. The MVICU benchmark models patients' responses... The data was sourced from the publicly available MIMIC III database (Saeed et al. 2011)... The Cancer Genome Atlas (TCGA). The TCGA project collected gene expression data from various types of cancers in 9659 individuals (Weinstein et al. 2013). |
| Dataset Splits | Yes | All three datasets were randomly split into training (63%), validation (27%) and test sets (10%). |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions using the 'causaldrf package (Galagate 2016)' but does not specify version numbers for this or any other software dependencies such as programming languages, deep learning frameworks, or other libraries. |
| Experiment Setup | Yes | To ensure a fair comparison of the tested models, we took a systematic approach to hyperparameter search. Each model was given exactly the same number of hyperparameter optimisation runs with hyperparameters chosen at random from predefined hyperparameter ranges (Appendix B). We used 5 hyperparameter optimisation runs for each model on TCGA and 10 on all other benchmarks. Furthermore, we used the same random seed for each model, i.e. all models were evaluated on exactly the same sets of hyperparameter configurations. ... For all DRNets and ablations, we used E = 5 dosage strata with the exception of those presented in Figure 2. |
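The split fractions and the seeded hyperparameter protocol quoted above can be sketched in a few lines. This is a minimal illustration, not the authors' code: function names, the candidate hyperparameter ranges, and the index-based splitting are all illustrative assumptions; only the 63%/27%/10% fractions, the fixed shared seed, and the per-benchmark run counts (5 on TCGA, 10 elsewhere) come from the paper.

```python
import random


def split_indices(n, seed=0, fractions=(0.63, 0.27, 0.10)):
    """Shuffle indices and split into train/validation/test sets
    using the 63%/27%/10% fractions reported in the paper."""
    rng = random.Random(seed)
    idx = list(range(n))
    rng.shuffle(idx)
    n_train = int(fractions[0] * n)
    n_val = int(fractions[1] * n)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]


def sample_hyperparameters(ranges, num_runs, seed=0):
    """Draw random configurations from predefined ranges; fixing the
    seed gives every model the same configurations, as in the paper."""
    rng = random.Random(seed)
    return [{name: rng.choice(values) for name, values in ranges.items()}
            for _ in range(num_runs)]


# 10 optimisation runs (the non-TCGA setting) over hypothetical ranges.
configs = sample_hyperparameters(
    {"learning_rate": [1e-4, 1e-3, 1e-2], "batch_size": [32, 64, 128]},
    num_runs=10,
)
# Splitting e.g. the 5000-article News benchmark.
train, val, test = split_indices(5000)
```

Because the seed is shared, calling `sample_hyperparameters` twice with the same arguments returns identical configuration lists, which is what makes the comparison across models fair.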