A Neural Framework for Generalized Causal Sensitivity Analysis
Authors: Dennis Frauen, Fergus Imrie, Alicia Curth, Valentyn Melnychuk, Stefan Feuerriegel, Mihaela van der Schaar
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We provide theoretical guarantees that NEURALCSA can infer valid bounds on the causal query of interest and also demonstrate this empirically using both simulated and real-world data. |
| Researcher Affiliation | Academia | LMU Munich; Munich Center for Machine Learning; UCLA; University of Cambridge; Alan Turing Institute |
| Pseudocode | Yes | Algorithm 1: Full learning algorithm for NEURALCSA |
| Open Source Code | Yes | Code is available at https://github.com/DennisFrauen/NeuralCSA. |
| Open Datasets | Yes | We create a semi-synthetic dataset using MIMIC-III (Johnson et al., 2016)... |
| Dataset Splits | Yes | We obtain n = 14719 patients and split the data into train (80%), val (10%), and test (10%). (See the split sketch after the table.) |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used for running experiments are mentioned in the paper. |
| Software Dependencies | No | The paper mentions software components like 'autoregressive neural spline flows' and 'Adam optimizer' but does not specify their version numbers or the version of the programming language used. |
| Experiment Setup | Yes | We choose the number of epochs such that NEURALCSA satisfies the sensitivity constraint for a given sensitivity parameter. Details are in Appendix F. ... Table 4 (hyperparameter tuning details): Stage 1 CNF: epochs 50; batch size {32, 64, 128}; learning rate {0.0005, 0.001, 0.005}; hidden layer size (hypernetwork) {5, 10, 20, 30}; number of spline bins {2, 4, 8}. Propensity network: epochs 30; batch size {32, 64, 128}; learning rate {0.0005, 0.001, 0.005}; hidden layer size {5, 10, 20, 30}; dropout probability {0, 0.1}. (A config sketch of this search space follows the table.) |
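For concreteness, below is a minimal sketch of the 80%/10%/10% split reported in the Dataset Splits row (n = 14,719 MIMIC-III patients). The random seed and variable names are illustrative assumptions, not taken from the authors' code.

```python
import numpy as np

rng = np.random.default_rng(42)       # seed is an assumption for illustration
n = 14719                             # number of MIMIC-III patients reported in the paper
indices = rng.permutation(n)          # shuffle patient indices before splitting

n_train = int(0.8 * n)                # 80% train -> 11775 patients
n_val = int(0.1 * n)                  # 10% val   -> 1471 patients

train_idx = indices[:n_train]
val_idx = indices[n_train:n_train + n_val]
test_idx = indices[n_train + n_val:]  # remaining ~10% -> 1473 patients

print(len(train_idx), len(val_idx), len(test_idx))
```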
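The Table 4 search ranges quoted in the Experiment Setup row can be encoded as a plain grid-search config. This is a hedged sketch: the names `SEARCH_SPACE`, `stage1_cnf`, `propensity_net`, and `grid` are illustrative labels, not identifiers from the NEURALCSA codebase, and the training step is left as a stub.

```python
import itertools

# Search ranges copied from Table 4 of the paper; key names are assumptions.
SEARCH_SPACE = {
    "stage1_cnf": {
        "epochs": [50],                          # fixed per Table 4
        "batch_size": [32, 64, 128],
        "learning_rate": [0.0005, 0.001, 0.005],
        "hidden_size": [5, 10, 20, 30],          # hypernetwork hidden layer size
        "spline_bins": [2, 4, 8],
    },
    "propensity_net": {
        "epochs": [30],
        "batch_size": [32, 64, 128],
        "learning_rate": [0.0005, 0.001, 0.005],
        "hidden_size": [5, 10, 20, 30],
        "dropout": [0.0, 0.1],
    },
}

def grid(space):
    """Enumerate every hyperparameter combination in a search space."""
    keys = list(space)
    for values in itertools.product(*(space[k] for k in keys)):
        yield dict(zip(keys, values))

for config in grid(SEARCH_SPACE["stage1_cnf"]):
    # Train and validate a Stage 1 CNF with this config. Per the paper, the
    # number of epochs is additionally chosen so that NEURALCSA satisfies
    # the sensitivity constraint for the given sensitivity parameter.
    pass
```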