Frequentist Uncertainty in Recurrent Neural Networks via Blockwise Influence Functions
Authors: Ahmed Alaa, Mihaela van der Schaar
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Using data from a critical care setting, we demonstrate the utility of uncertainty quantification in sequential decision-making. |
| Researcher Affiliation | Academia | ¹University of California, Los Angeles; ²Cambridge University. |
| Pseudocode | Yes | Algorithm 1 BJ-based uncertainty estimation in RNNs |
| Open Source Code | No | The paper does not provide an explicit statement or link for open-sourcing the code. |
| Open Datasets | Yes | We conducted experiments on data from the Medical Information Mart for Intensive Care (MIMIC-III) (Johnson et al., 2016) database. |
| Dataset Splits | No | We generated 1000 training sequences using the model in (9) with T = 10 time steps and trained a vanilla RNN model to predict the label yt on the basis of the input sequence x1:t. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments. |
| Software Dependencies | No | The paper discusses various RNN architectures (simple RNN, LSTMs, GRUs) and models but does not provide specific software dependencies with version numbers. |
| Experiment Setup | Yes | To ensure a fair comparison, we fix the hyperparameters of the underlying RNNs in all baselines to the following: Number of layers: 1, Number of hidden units: 20, Learning rate: 0.01, Batch size: 150, Number of epochs: 10, Number of steps: 1000. |
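The fixed hyperparameters reported in the experiment setup can be made concrete with a minimal sketch of the underlying vanilla RNN. This is an illustration only: the input dimensionality, output head, and tanh nonlinearity are assumptions not stated in the table, while the hidden size, batch size, and sequence length follow the reported values.

```python
import numpy as np

# Reported hyperparameters (from the paper's experiment setup)
HIDDEN_UNITS = 20    # "Number of hidden units: 20"
LEARNING_RATE = 0.01  # "Learning rate: 0.01" (unused in this forward-only sketch)
BATCH_SIZE = 150     # "Batch size: 150"
T = 10               # sequence length used in the synthetic experiment

# Assumptions: input size and single scalar output per time step
INPUT_SIZE = 5

rng = np.random.default_rng(0)
W_xh = rng.normal(scale=0.1, size=(INPUT_SIZE, HIDDEN_UNITS))
W_hh = rng.normal(scale=0.1, size=(HIDDEN_UNITS, HIDDEN_UNITS))
W_hy = rng.normal(scale=0.1, size=(HIDDEN_UNITS, 1))

def rnn_forward(x):
    """Single-layer vanilla tanh RNN over a batch of sequences.

    x: array of shape (batch, T, INPUT_SIZE)
    returns: per-step predictions y_t of shape (batch, T),
    matching the paper's setup of predicting y_t from x_{1:t}.
    """
    batch = x.shape[0]
    h = np.zeros((batch, HIDDEN_UNITS))
    outputs = []
    for t in range(x.shape[1]):
        h = np.tanh(x[:, t, :] @ W_xh + h @ W_hh)
        outputs.append((h @ W_hy).squeeze(-1))
    return np.stack(outputs, axis=1)

x = rng.normal(size=(BATCH_SIZE, T, INPUT_SIZE))
y_hat = rnn_forward(x)
print(y_hat.shape)  # one prediction per time step for each sequence
```

The single recurrent layer with 20 hidden units mirrors the fixed baseline configuration; training (10 epochs, learning rate 0.01) would sit on top of this forward pass.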