Protect Your Score: Contact-Tracing with Differential Privacy Guarantees
Authors: Rob Romijnders, Christos Louizos, Yuki M. Asano, Max Welling
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The algorithm is tested on the two most widely used agent-based COVID-19 simulators and demonstrates superior performance in a wide range of settings. |
| Researcher Affiliation | Collaboration | Rob Romijnders¹, Christos Louizos², Yuki M. Asano¹, Max Welling¹ (¹University of Amsterdam, ²Qualcomm AI Research) |
| Pseudocode | Yes | Algorithm 1: Differentially Private Factorized Neighbors |
| Open Source Code | Yes | The code for our method and all experiments is available at github.com/RobRomijnders/dpfn_aaai. |
| Open Datasets | Yes | The Open ABM simulator (Hinch et al. 2021) uses a network-based process to generate contacts, and is calibrated against UK data for different age, household, and occupational network patterns (school, work, and social networks). |
| Dataset Splits | No | The paper uses agent-based simulators (Open ABM and Covasim) which do not involve static dataset splits for training, validation, and testing in the traditional sense. It discusses testing percentages within the simulation itself, but not data partitioning for model training. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper mentions certain software and libraries (e.g., Numba), but it does not provide specific version numbers for key software components or dependencies used in their experimental setup. |
| Experiment Setup | Yes | The default FPR and FNR are 1% and 0.1%, respectively. To test robustness to noisy tests, we increase these noise rates in an experiment similar to (Romijnders et al. 2023): the FPR increases up to a level of 25% and the FNR up to a level of 3%; these are the worst-case design specifications prescribed by the European Centre for Disease Prevention and Control during the COVID-19 pandemic (ECDC 2021). The δ forms an important parameter in differential privacy, as it constitutes the probability of exceeding the ε bound; we set this value to 1/1000 in all experiments. We found a value of B = 10 to work best. Determining the number of Gibbs samples constitutes a topic by itself (Robert and Casella 2004). Our case is even more complex, as each additional sample improves the statistical estimate but simultaneously increases the privacy bound. We find that taking 10 samples with 10 skip steps, after 100 burn-in steps, works best (Robert and Casella 2004); taking more samples would worsen the privacy bound, and taking fewer samples worsens the estimate for the COVIDSCORE. |
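The reported sampling schedule (100 burn-in steps, then 10 samples with 10 skip steps between them) can be sketched as a generic Gibbs-style loop. This is an illustrative sketch, not the paper's implementation: the transition kernel `step` and the toy chain below are hypothetical stand-ins for the actual COVIDSCORE inference.

```python
import random

def gibbs_estimate(step, init, burn_in=100, skip=10, num_samples=10):
    """Illustrative Gibbs-style sampling schedule: discard `burn_in`
    iterations, then keep every `skip`-th state until `num_samples`
    samples are collected, and return their average.

    `step` is a hypothetical transition kernel mapping a state to the
    next state; `init` is the initial state.
    """
    state = init
    for _ in range(burn_in):       # burn-in: let the chain mix
        state = step(state)
    samples = []
    while len(samples) < num_samples:
        for _ in range(skip):      # skip steps reduce autocorrelation
            state = step(state)
        samples.append(state)      # keep only every `skip`-th state
    return sum(samples) / len(samples)

# Toy usage: an AR(1)-style chain that contracts toward 0.5 with small noise.
random.seed(0)
estimate = gibbs_estimate(
    lambda s: 0.5 + 0.5 * (s - 0.5) + random.gauss(0, 0.01),
    init=0.0,
)
print(round(estimate, 2))
```

The trade-off the paper describes shows up directly in `num_samples`: raising it tightens the Monte Carlo estimate but, since each released sample consumes privacy budget, loosens the overall (ε, δ) guarantee.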
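To give the δ = 1/1000 setting some concrete scale, the standard Gaussian mechanism shows how δ and ε jointly determine the noise level. This is a generic textbook calibration, not the paper's mechanism; the ε = 0.5 value and unit sensitivity are illustrative assumptions.

```python
import math

def gaussian_sigma(epsilon, delta, sensitivity=1.0):
    """Classical Gaussian-mechanism calibration: adding noise with
    sigma = sensitivity * sqrt(2 * ln(1.25 / delta)) / epsilon
    yields (epsilon, delta)-differential privacy for epsilon in (0, 1).
    """
    return sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / epsilon

# With the paper's delta = 1/1000 and an illustrative epsilon = 0.5:
sigma = gaussian_sigma(epsilon=0.5, delta=1e-3)
print(round(sigma, 2))  # noise scale required per unit of sensitivity
```

Shrinking δ below 1/1000 grows the `ln(1.25 / delta)` factor only logarithmically, which is why δ can be set quite small at a modest cost in noise.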