Tracking Disease Outbreaks from Sparse Data with Bayesian Inference
Authors: Bryan Wilder, Michael Mina, Milind Tambe
AAAI 2021, pp. 4883-4891 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results for an example motivated by COVID-19 show that our method produces an accurate and well-calibrated posterior, while standard methods for estimating the reproduction number can fail badly. Extensive experiments show that our method recovers an accurate and well-calibrated posterior distribution in challenging situations where previous methods fail. |
| Researcher Affiliation | Academia | Bryan Wilder (1), Michael Mina (2), Milind Tambe (1). (1) John A. Paulson School of Engineering and Applied Sciences, Harvard University; (2) T.H. Chan School of Public Health, Harvard University. bwilder@g.harvard.edu, mmina@hsph.harvard.edu, milind_tambe@harvard.edu |
| Pseudocode | No | The paper does not include pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statements about releasing source code or links to a code repository. |
| Open Datasets | No | The paper describes generating synthetic data for its experiments but does not provide concrete access information (e.g., a link or citation for a public repository) for this data. It mentions using 'previously estimated distributions for D (Kucirka et al. 2020; Iyer et al. 2020)' which are parameters for the simulation, not the dataset itself. |
| Dataset Splits | No | The paper describes testing methods on simulated data under various settings (e.g., 'outbreak setting', 'random trend setting', different observation models, sample sizes) but does not specify explicit train/validation/test dataset splits with percentages or sample counts. The term 'validation' in the paper refers to the evaluation process itself rather than a specific dataset split. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU models, CPU types, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software components, libraries, or solvers used in the experiments. |
| Experiment Setup | Yes | We test the performance of GPRt vs standard baselines on a wide variety of settings. We choose three baselines which have been recommended by leading epidemiologists as methods of choice for COVID-19 (Gostic et al. 2020). First is the Wallinga-Teunis (WT) method (Wallinga and Teunis 2004)... Second is the method of Cori et al. (Cori) (Thompson et al. 2019; Cori et al. 2013)... Third is EpiNow (Abbott et al. 2020)... We include two different settings for the ground truth Rt. First, the outbreak setting... Second, the random trend setting... We include both PCR and serological tests, using previously estimated distributions for D (Kucirka et al. 2020; Iyer et al. 2020). We also include three sampling models introduced earlier: uniform underreporting, cross-sectional, and longitudinal. Finally, for each of the four combinations of tests and sampling method, we include four different sample sizes. For longitudinal testing we use d = 14; results for other values are very similar (see appendix). |
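
To make the described setup concrete, below is a minimal simulation sketch of the kind of synthetic data generation the Experiment Setup row refers to. This is not the paper's implementation: the serial-interval weights, the functional forms of the "outbreak" and "random trend" Rt trajectories, and the 10% reporting rate are all assumed placeholder values, and only a uniform-underreporting observation model is sketched (the paper's PCR/serological test sensitivity and cross-sectional/longitudinal sampling models are omitted).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative serial-interval weights (placeholder values, not the paper's
# fitted distribution): probability that today's infection was caused by an
# infection s days earlier.
serial_interval = np.array([0.05, 0.15, 0.22, 0.20, 0.15, 0.10, 0.07, 0.04, 0.02])
serial_interval = serial_interval / serial_interval.sum()

def simulate_infections(r_t, seed_infections=50):
    """Draw daily infection counts from an Rt trajectory via the renewal
    equation: I_t ~ Poisson(R_t * sum_s w_s * I_{t-s})."""
    T = len(r_t)
    infections = np.zeros(T)
    infections[0] = seed_infections
    for t in range(1, T):
        window = infections[max(0, t - len(serial_interval)):t][::-1]
        force = np.sum(window * serial_interval[:len(window)])
        infections[t] = rng.poisson(r_t[t] * force)
    return infections

T = 100
# "Outbreak" setting (assumed shape): Rt starts above 1 and decays below 1.
rt_outbreak = 2.0 * np.exp(-0.03 * np.arange(T)) + 0.5
# "Random trend" setting (assumed shape): Rt follows a smooth random walk.
rt_random = np.clip(1.0 + np.cumsum(rng.normal(0, 0.05, T)), 0.1, None)

infections = simulate_infections(rt_outbreak)

# Uniform underreporting observation model: each infection is observed
# independently with a fixed probability (sample size is a tunable knob).
reporting_rate = 0.1
observed = rng.binomial(infections.astype(int), reporting_rate)
print(observed[:10])
```

The point of the sketch is the two-stage structure the paper's experiments rely on: a ground-truth Rt trajectory drives latent infections through a renewal process, and a separate observation model turns those infections into the sparse data the inference methods actually see.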