Minimax Optimal Estimation of Approximate Differential Privacy on Neighboring Databases

Authors: Xiyang Liu, Sewoong Oh

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We present numerical experiments supporting our theoretical predictions in Section 3. Figure 1(a) illustrates the Mean Square Error (MSE) for estimating dε(P‖Q) between uniform distribution P and Zipf distribution Q... We demonstrate how we can use Algorithm 2 to detect mechanisms with false claims of DP guarantees on four types of mechanisms. (A sketch of the dε(P‖Q) computation appears after the table.)
Researcher Affiliation | Academia | Xiyang Liu, Sewoong Oh; Allen School of Computer Science and Engineering, University of Washington; {xiyangl, sewoong}@cs.washington.edu
Pseudocode | Yes | Algorithm 1: Differential Privacy (DP) estimator with known P ... Algorithm 2: Differential Privacy (DP) estimator. (A naive plug-in baseline, for comparison only, follows the table.)
Open Source Code | Yes | We present the experiment details in Appendix B and the code to reproduce our experiments at https://github.com/xiyangl3/adp-estimator.
Open Datasets | No | The paper mentions using 'uniform distribution P and Zipf distribution Q' (synthetic data) and 'real-world experiments' but does not provide any concrete access information (link, DOI, or citation with author/year) for these or any other publicly available datasets.
Dataset Splits | No | The paper does not explicitly provide details about training/validation/test dataset splits, such as percentages, absolute counts, or references to predefined splits for model training.
Hardware Specification | Yes | All experiments were performed on a machine with Intel Core i7-4770K CPU (3.50GHz) and 32GB RAM.
Software Dependencies | No | The paper mentions 'Python 3.6' with a version number but does not give versions for other key software components such as 'numpy', so it does not satisfy the requirement that multiple components, or all mentioned components, be versioned.
Experiment Setup | No | The paper does not provide specific experimental setup details such as hyperparameters (e.g., learning rate, batch size, number of epochs) or optimizer settings. It refers to Appendix B.2 for settings, but this appendix discusses the mechanisms rather than detailed training configurations.
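
For context on the quantity referenced in the Research Type row: in standard approximate-DP notation, dε(P‖Q) is the hockey-stick divergence, the smallest δ for which P(S) ≤ e^ε Q(S) + δ holds for every event S. The short Python sketch below (assuming numpy, with an illustrative alphabet size and Zipf exponent that are not taken from the paper) computes this quantity for a uniform P and a Zipf Q.

```python
import numpy as np

def hockey_stick_divergence(p, q, eps):
    # d_eps(P || Q) = sum_x max(P(x) - exp(eps) * Q(x), 0):
    # the smallest delta with P(S) <= exp(eps) * Q(S) + delta for every event S.
    return float(np.maximum(p - np.exp(eps) * q, 0.0).sum())

k = 100                          # alphabet size (illustrative, not from the paper)
p = np.full(k, 1.0 / k)          # uniform distribution P
q = 1.0 / np.arange(1, k + 1)    # Zipf weights with exponent 1 (assumed)
q = q / q.sum()                  # normalize to a probability distribution Q

for eps in (0.1, 0.5, 1.0):
    print(f"eps = {eps}: d_eps(P || Q) = {hockey_stick_divergence(p, q, eps):.4f}")
```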
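
The Pseudocode row names Algorithm 1 (DP estimator with known P) and Algorithm 2 (DP estimator), which are not reproduced here. Purely as a baseline for comparison, and not the paper's method, the following hypothetical sketch forms empirical distributions from i.i.d. samples and plugs them into the hockey-stick formula; such plug-in estimates are generally not minimax optimal, which is the gap the paper's estimators target.

```python
import numpy as np

def empirical_pmf(samples, k):
    # Empirical probability mass function over the alphabet {0, ..., k-1}.
    counts = np.bincount(samples, minlength=k)
    return counts / counts.sum()

def plug_in_dp_estimate(samples_p, samples_q, k, eps):
    # Naive plug-in baseline: apply the hockey-stick formula to empirical pmfs.
    p_hat = empirical_pmf(samples_p, k)
    q_hat = empirical_pmf(samples_q, k)
    return float(np.maximum(p_hat - np.exp(eps) * q_hat, 0.0).sum())

# Hypothetical usage on synthetic uniform-vs-Zipf data (all settings are assumptions).
rng = np.random.default_rng(0)
k, n, eps = 100, 10_000, 0.5
p = np.full(k, 1.0 / k)
q = 1.0 / np.arange(1, k + 1)
q = q / q.sum()
samples_p = rng.choice(k, size=n, p=p)
samples_q = rng.choice(k, size=n, p=q)
print(plug_in_dp_estimate(samples_p, samples_q, k, eps))
```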