How Robust are the Estimated Effects of Nonpharmaceutical Interventions against COVID-19?
Authors: Mrinank Sharma, Sören Mindermann, Jan Brauner, Gavin Leech, Anna Stephenson, Tomáš Gavenčiak, Jan Kulveit, Yee Whye Teh, Leonid Chindelevitch, Yarin Gal
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To address these challenges, we empirically investigate the influence of common assumptions made by NPI effectiveness models. We build on previous state-of-the-art NPI effectiveness models [2, 7] and construct 6 variants that make different structural assumptions. Without access to ground-truth NPI effectiveness estimates, we evaluate models by assessing how well their estimates generalise to unseen countries, and how much their estimates are influenced by unobserved factors. We find that assuming transmission noise yields more robust estimates that also generalise better. Furthermore, we systematically validate all of our models, assessing how sensitive NPI effectiveness estimates are to variations in the input data and assumed epidemiological parameters. |
| Researcher Affiliation | Academia | 1 Department of Statistics, University of Oxford, UK. 2 Department of Engineering Science, University of Oxford, UK. 3 OATML Group, Department of Computer Science, University of Oxford, UK. 4 Department of Computer Science, University of Bristol, UK. 5 School of Engineering and Applied Sciences, Harvard University, USA. 6 Future of Humanity Institute, University of Oxford, UK. 7 MRC Centre for Global Infectious Disease Analysis; and the Abdul Latif Jameel Institute for Disease and Emergency Analytics (J-IDEA), School of Public Health, Imperial College London. |
| Pseudocode | No | The paper describes models using mathematical equations and general approaches but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our model implementations and sensitivity analyses can be found at https://github.com/epidemics/COVIDNPIs/tree/neurips. |
| Open Datasets | Yes | Data & Implementation. We use our previous NPI dataset [2], composed of data on the implementation of 9 NPIs in 41 countries between January and end of May 2020 (validated with independent double entry). Data on reported cases and deaths is from the Johns Hopkins CSSE tracker [19]. |
| Dataset Splits | No | We measure holdout predictive likelihood on a test set of 6 countries, having tuned hyperparameters by cross-validation. While cross-validation is mentioned, the specific setup (e.g., number of folds, split percentages) is not provided, so the validation split cannot be reproduced from the text alone. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used, such as GPU models, CPU specifications, or cloud computing instance types. |
| Software Dependencies | No | We implement our models in PyMC3 [31], using Hamiltonian Monte Carlo NUTS [15] for inference. While software libraries are mentioned, specific version numbers for PyMC3 or other dependencies are not provided, preventing full reproducibility. |
| Experiment Setup | Yes | We use 4 chains with 1250 samples per chain. For runs with default settings, we ensure that the Gelman-Rubin R̂ is less than 1.05 and that there are no divergent transitions. |
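The convergence criterion in the setup row (Gelman-Rubin R̂ < 1.05 across 4 chains of 1250 samples each) can be illustrated with a minimal sketch of the classic R̂ statistic. This is not the authors' code: PyMC3/ArviZ internally compute a refined split-R̂ variant, and the chain data here is synthetic, but the same pass/fail logic applies.

```python
import numpy as np

def gelman_rubin(chains):
    """Classic Gelman-Rubin R-hat for an (m, n) array of m chains of n draws."""
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)           # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()     # mean within-chain variance
    var_plus = (n - 1) / n * W + B / n        # pooled posterior-variance estimate
    return np.sqrt(var_plus / W)

rng = np.random.default_rng(0)
mixed = rng.normal(size=(4, 1250))            # 4 well-mixed chains, as in the paper's setup
stuck = mixed + np.arange(4)[:, None] * 5.0   # chains stuck at different modes

print(gelman_rubin(mixed))  # close to 1: passes an R-hat < 1.05 check
print(gelman_rubin(stuck))  # far above 1.05: convergence failure
```

Well-mixed chains sampling the same distribution give R̂ near 1, while chains exploring different regions inflate the between-chain term and push R̂ well above the 1.05 threshold.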