Partial Counterfactual Identification of Continuous Outcomes with a Curvature Sensitivity Model
Authors: Valentyn Melnychuk, Dennis Frauen, Stefan Feuerriegel
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We empirically demonstrate the effectiveness of our Augmented Pseudo-Invertible Decoder. To the best of our knowledge, ours is the first partial identification model for Markovian structural causal models with continuous outcomes. Finally, we demonstrate its effectiveness across several numerical experiments. |
| Researcher Affiliation | Academia | Valentyn Melnychuk, Dennis Frauen & Stefan Feuerriegel; LMU Munich & Munich Center for Machine Learning (MCML), Munich, Germany; melnychuk@lmu.de |
| Pseudocode | Yes | Algorithm 1 Training algorithm of APID |
| Open Source Code | Yes | Code is available at https://github.com/Valentyn1997/CSM-APID. |
| Open Datasets | Yes | To show the effectiveness of our APID at partial counterfactual identification, we use two synthetic datasets. We use multi-country data from [7]. It contains the weekly-averaged number of recorded COVID-19 cases for 20 Western countries. The data is available at https://github.com/nbanho/npi_effectiveness_first_wave/blob/master/data/data_preprocessed.csv (a minimal loading sketch follows this table). |
| Dataset Splits | No | The paper specifies the number of observations drawn (e.g., 'n_a = 1,000 observations... so that n = 2,000') and the filters applied, but it does not provide explicit training, validation, and test splits or a cross-validation setup. |
| Hardware Specification | Yes | Experiments were carried out on 2 GPUs (NVIDIA A100-PCIE-40GB) with Intel Xeon Silver 4316 CPUs @ 2.30GHz. |
| Software Dependencies | No | The paper mentions the use of 'Adam optimizer' and 'deep learning libraries' but does not specify version numbers for programming languages (e.g., Python) or other key software libraries (e.g., PyTorch, TensorFlow). |
| Experiment Setup | Yes | We use the Adam optimizer [71] with a learning rate η = 0.01 and a minibatch size of b = 32 to fit our APID. [...] For the residual normalizing flows, we use t = 15 residual transformations, each with h_t = 5 units in the hidden layers. We set the relative and absolute tolerance of the fixed-point iterations to 0.0001 and the maximal number of iterations to 200. For the variational augmentations, we set the number of units in the hidden layer to h_g = 5 and the variance of the augmentation to ε² = 0.5². For the noise regularization, we chose σ² = 0.001². A hedged configuration sketch follows this table. |
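As a convenience for the Open Datasets row above, the snippet below sketches how the linked CSV could be loaded. The `/blob/` URL from the report is converted to its `raw.githubusercontent.com` form, which is the standard way to fetch a GitHub-hosted file programmatically; the column layout is not documented in this report, so nothing is assumed about it.

```python
# Minimal sketch of loading the COVID-19 dataset referenced above.
# The repository path comes from the report; the raw.githubusercontent.com
# form serves the file contents that GitHub displays at the /blob/ URL.
import pandas as pd

RAW_URL = (
    "https://raw.githubusercontent.com/nbanho/npi_effectiveness_first_wave/"
    "master/data/data_preprocessed.csv"
)

df = pd.read_csv(RAW_URL)
print(df.shape)  # column layout is not documented in this report
```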
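The hyperparameters reported in the Experiment Setup row can be collected into a single configuration. The sketch below is a hedged illustration in PyTorch: the `ResidualBlock` class and model wiring are hypothetical stand-ins, not the authors' APID implementation (which lives at https://github.com/Valentyn1997/CSM-APID), and it omits the Lipschitz constraint (e.g., spectral normalization) that invertible residual flows require for fixed-point inversion.

```python
# Hedged sketch of the reported training configuration; ResidualBlock is a
# hypothetical stand-in for one residual transformation, not the authors' code.
import torch
import torch.nn as nn

LR = 0.01               # learning rate η (Adam)
BATCH_SIZE = 32         # minibatch size b
N_TRANSFORMS = 15       # t residual transformations
H_T = 5                 # h_t hidden units per residual transformation
FP_TOL = 1e-4           # relative/absolute tolerance of fixed-point iterations
FP_MAX_ITERS = 200      # maximal number of fixed-point iterations
H_G = 5                 # h_g hidden units for the variational augmentations
AUG_VAR = 0.5 ** 2      # ε², variance of the augmentation
NOISE_VAR = 0.001 ** 2  # σ², noise-regularization variance

class ResidualBlock(nn.Module):
    """One residual map x -> x + g(x). Real residual flows additionally
    constrain g's Lipschitz constant so the map is invertible via
    fixed-point iteration; that constraint is omitted in this sketch."""
    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.g = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.g(x)

# Stack t residual transformations and attach the reported Adam optimizer.
flow = nn.Sequential(*(ResidualBlock(1, H_T) for _ in range(N_TRANSFORMS)))
optimizer = torch.optim.Adam(flow.parameters(), lr=LR)
```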