Learning Perturbations to Explain Time Series Predictions

Author: Joseph Enguehard

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We present our results in Table 1. These results show that, although our method performs slightly lower than some baselines in terms of AUP, it significantly outperforms all other methods by every other metric. ... We present on Tables 4 and 5 results with our method compared with different baselines, computing our metrics by masking 20% of the data..."
Researcher Affiliation | Industry | "1Babylon Health, 1 Knightsbridge Grn, London SW1X 7QA United Kingdom 2Skippr, 99 Milton Keynes Business Centre, Milton Keynes MK14 6GD United Kingdom. Correspondence to: Joseph Enguehard <joseph@skippr.com>."
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | "An implementation of this work can be found at https://github.com/josephenguehard/time_interpret"
Open Datasets | Yes | "We perform experiments on two datasets: a synthetic one, generated using a Hidden Markov model, and a real-world one, MIMIC-III (Johnson et al., 2016)."
Dataset Splits | No | The paper describes the datasets used (HMM and MIMIC-III) and mentions training a GRU model, but it does not specify explicit training/validation/test splits (e.g., percentages or sample counts) needed for reproduction.
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU models, CPU types, or memory) used to run the experiments.
Software Dependencies | No | The paper mentions implementing the neural network and GRU models but does not give version numbers for any software dependencies, such as the Python or PyTorch versions used.
Experiment Setup | Yes | "We train a one-layer GRU with a hidden size of 200 to predict this in-hospital mortality... We also set λ1 = λ2 = 1 in our experiments, while an ablation study on the choice of these hyperparameters can be found in Section 4 and Appendix A."
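The quoted setup (a one-layer GRU with hidden size 200 feeding a sigmoid mortality output) can be sketched as follows. This is a minimal illustrative implementation in plain numpy, not the paper's code: the input feature count, initialization scheme, and single-sequence interface are assumptions; only the hidden size of 200 and the binary in-hospital-mortality prediction come from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUClassifier:
    """One-layer GRU classifier; hidden size 200 matches the paper's setup."""

    def __init__(self, input_size, hidden_size=200, seed=0):
        rng = np.random.default_rng(seed)
        s = 1.0 / np.sqrt(hidden_size)
        self.H = hidden_size
        # Stacked input and recurrent weights for the z, r, n gates.
        self.W = rng.uniform(-s, s, (3 * hidden_size, input_size))
        self.U = rng.uniform(-s, s, (3 * hidden_size, hidden_size))
        self.b = np.zeros(3 * hidden_size)
        # Linear head mapping the final hidden state to P(mortality).
        self.w_out = rng.uniform(-s, s, hidden_size)
        self.b_out = 0.0

    def forward(self, xs):
        """xs: array of shape (time_steps, input_size); returns P(mortality)."""
        H = self.H
        h = np.zeros(H)
        for x in xs:
            gi = self.W @ x + self.b   # input contributions for all gates
            gh = self.U @ h            # recurrent contributions
            z = sigmoid(gi[:H] + gh[:H])            # update gate
            r = sigmoid(gi[H:2 * H] + gh[H:2 * H])  # reset gate
            n = np.tanh(gi[2 * H:] + r * gh[2 * H:])  # candidate state
            h = (1.0 - z) * n + z * h  # GRU state update (PyTorch convention)
        return sigmoid(self.w_out @ h + self.b_out)
```

A usage sketch, with an invented feature count of 31 per time step: `GRUClassifier(input_size=31).forward(sequence)` returns a probability in (0, 1) for one patient sequence.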