Universal Exact Compression of Differentially Private Mechanisms
Authors: Yanxiao Liu, Wei-Ning Chen, Ayfer Ozgur, Cheuk Ting Li
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on distributed mean estimation show that PPR consistently gives a better trade-off between communication, accuracy, and central differential privacy than the coordinate-subsampled Gaussian mechanism, while also providing local differential privacy. |
| Researcher Affiliation | Academia | Yanxiao Liu The Chinese University of Hong Kong yanxiaoliu@link.cuhk.edu.hk Wei-Ning Chen Stanford University wnchen@stanford.edu Ayfer Özgür Stanford University aozgur@stanford.edu Cheuk Ting Li The Chinese University of Hong Kong ctli@ie.cuhk.edu.hk |
| Pseudocode | Yes | The algorithm is given in Algorithm 1. ... Algorithm 1 (Poisson Private Representation), Procedure PPREncode(α, Q, r, r′, s) |
| Open Source Code | Yes | Our code can be found at https://github.com/cheuktingli/PoissonPrivateRepr |
| Open Datasets | No | The paper describes how the data was generated synthetically for the experiments: 'each of which is generated according to X_i(j) ~ 2·Ber(0.8) − 1 i.i.d., where Ber(0.8) is a Bernoulli random variable with parameter p = 0.8.' It does not refer to or provide access information for a publicly available or open dataset. |
| Dataset Splits | No | The paper does not explicitly provide training/test/validation dataset splits. It describes the generation of simulated data for distributed mean estimation, which is not typically partitioned into these specific splits for evaluation. |
| Hardware Specification | Yes | Experiments were executed on an M1 Pro MacBook, 8-core CPU (≈3.2 GHz) with 16 GB memory. |
| Software Dependencies | No | The paper mentions providing source code on GitHub, implying Python is used. However, it does not specify version numbers for Python itself or any other software libraries (e.g., NumPy, PyTorch) that would be needed for replication. |
| Experiment Setup | Yes | We use the same setup that has been used in [19]: consider n = 500 clients, and the dimension of local vectors is d = 1000, each of which is generated according to X_i(j) ~ 2·Ber(0.8) − 1 i.i.d., where Ber(0.8) is a Bernoulli random variable with parameter p = 0.8. We require (ε, δ)-central DP with δ = 10⁻⁶ and ε ∈ [0.05, 6] and apply the PPR with α = 2 to simulate the Gaussian mechanism, where the privacy budgets are accounted via Rényi DP. |
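PPR builds on the Poisson functional representation, in which a sample from a target distribution Q is obtained by running an "exponential race" over candidates drawn from a shared reference distribution P. The sketch below, which assumes NumPy, illustrates only this underlying selection rule in a truncated (hence approximate) form for a toy Gaussian pair; it is not the paper's exact PPR algorithm, which additionally uses the parameter α and a careful stopping rule to guarantee exactness and privacy. The function name `pfr_sample` and the candidate budget are illustrative choices, not from the paper.

```python
import numpy as np

def pfr_sample(rng, n_candidates=2000, mu=1.0):
    """Approximately sample from Q = N(mu, 1) using candidates from P = N(0, 1)
    via a truncated Poisson functional representation (exponential race)."""
    X = rng.standard_normal(n_candidates)              # candidates X_i ~ P (shared randomness)
    T = np.cumsum(rng.exponential(1.0, n_candidates))  # arrival times of a rate-1 Poisson process
    w = np.exp(mu * X - 0.5 * mu**2)                   # density ratio dQ/dP at each candidate
    return X[np.argmin(T / w)]                         # winner of the race; index is what gets sent

rng = np.random.default_rng(0)
samples = np.array([pfr_sample(rng) for _ in range(2000)])
print(samples.mean())  # empirical mean should be close to mu = 1
```

In the communication setting, the encoder only transmits the winning index (whose entropy PPR bounds), and the decoder recovers the sample from the shared candidate stream.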
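Since the experiments use synthetic data rather than an open dataset, the inputs can be regenerated directly from the stated setup (n = 500 clients, d = 1000 coordinates, X_i(j) ~ 2·Ber(0.8) − 1 i.i.d.). A minimal sketch assuming NumPy; the random seed is an assumption, as the paper does not state one:

```python
import numpy as np

n, d = 500, 1000                 # clients and local vector dimension, per the paper's setup
rng = np.random.default_rng(0)   # seed is illustrative; the paper does not specify one

# X_i(j) ~ 2 * Ber(0.8) - 1, i.i.d. over clients i and coordinates j,
# so each entry is +1 with probability 0.8 and -1 with probability 0.2.
X = 2 * rng.binomial(1, 0.8, size=(n, d)) - 1

target = X.mean(axis=0)          # the mean vector the clients jointly estimate
print(X.shape, X.mean())         # overall mean should be near 2*0.8 - 1 = 0.6
```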