Communication Complexity in Locally Private Distribution Estimation and Heavy Hitters
Authors: Jayadev Acharya, Ziteng Sun
ICML 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct empirical evaluations for the one-bit distribution learning algorithm without public randomness proposed in Section 2. We compare the proposed algorithm (onebit) with other algorithms including Randomized Response (RR) (Warner, 1965), RAPPOR (Erlingsson et al., 2014), Hadamard Response (HR) (Acharya et al., 2019), and subset selection (subset) (Ye & Barg, 2018). To obtain samples, we generate synthetic data from various classes of distributions, including the uniform distribution, geometric distributions with parameters 0.8 and 0.98, Zipf distributions with parameters 1.0 and 0.5, and a two-step distribution. We conduct the experiments for k = 1000 and ε = 1. The results are shown in Figure 1. Each point is the average of 30 independent experiments. |
| Researcher Affiliation | Academia | Jayadev Acharya 1 Ziteng Sun 1 1School of Electrical and Computer Engineering, Cornell University. |
| Pseudocode | Yes | 1. Use an empirical estimate $\hat{s}$ for $s$ with $\hat{s}_y := \frac{1}{\lvert S_y \rvert} \sum_{j \in S_y} Y_j$ (14). 2. Motivated by (11), estimate $p_B$ as $\hat{p}_B = \frac{(e^\varepsilon + 1)\hat{s} - \mathbf{1}_K}{e^\varepsilon - 1}$. 3. Estimate the original distribution using (13) as $\hat{p}_K := H_K^{-1}(2\hat{p}_B - \mathbf{1}_K) = \frac{1}{K} H_K (2\hat{p}_B - \mathbf{1}_K)$. 4. Output $\hat{p}$, the projection of the first $k$ coordinates of the $K$-dimensional $\hat{p}_K$ onto the simplex $\Delta_k$. |
| Open Source Code | No | The paper does not provide any statement or link regarding the availability of open-source code for the described methodology. |
| Open Datasets | No | The paper mentions generating synthetic data from various distributions ('uniform distribution, geometric distributions with parameter 0.8 and 0.98, Zipf distributions with parameter 1.0 and 0.5 and Two-step distribution'), but it does not provide concrete access information (link, DOI, repository, or formal citation) to these datasets for public access. |
| Dataset Splits | No | The paper states 'Each point is the average of 30 independent experiments' but does not specify any dataset splits (e.g., train/validation/test percentages or counts) or cross-validation methodology. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU/CPU models, memory, or processing units used for running the experiments. |
| Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., programming languages, libraries, or frameworks). |
| Experiment Setup | Yes | We conduct the experiments for k = 1000 and ε = 1. The results are shown in Figure 1. Each point is the average of 30 independent experiments. |
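Since no open-source code is provided, the four decoding steps extracted in the Pseudocode row can be illustrated with a minimal end-to-end sketch. This is not the authors' implementation: the function names (`hr_encode`, `hr_decode`, `project_simplex`), the round-robin block assignment, and the per-user bit-flipping probabilities $e^\varepsilon/(e^\varepsilon+1)$ vs. $1/(e^\varepsilon+1)$ are assumptions based on Hadamard Response (Acharya et al., 2019); only the estimator steps 1–4 come from the paper's pseudocode.

```python
# Sketch of the one-bit (no public randomness) HR-style estimator.
# ASSUMPTION: encoder details below are our reconstruction, not the paper's code.
import numpy as np

def hadamard_matrix(K):
    """Sylvester construction of the K x K Hadamard matrix (K a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < K:
        H = np.block([[H, H], [H, -H]])
    return H

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex (sort-based)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - 1.0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def hr_encode(samples, blocks, eps, H, rng):
    """User j, assigned Hadamard row blocks[j], sends one epsilon-private bit:
    Pr(Y=1) = e^eps/(e^eps+1) if H[block, x] == 1, else 1/(e^eps+1)."""
    in_block = H[blocks, samples] == 1.0
    p_hi = np.exp(eps) / (np.exp(eps) + 1.0)
    p_lo = 1.0 / (np.exp(eps) + 1.0)
    return (rng.random(len(samples)) < np.where(in_block, p_hi, p_lo)).astype(float)

def hr_decode(Y, blocks, k, K, eps):
    """Steps 1-4 from the paper's pseudocode (assumes every block y is used)."""
    H = hadamard_matrix(K)
    # Step 1: empirical estimate s_hat[y] = mean bit among users with block y.
    s_hat = np.array([Y[blocks == y].mean() for y in range(K)])
    # Step 2: unbias: p_B_hat = ((e^eps + 1) s_hat - 1_K) / (e^eps - 1).
    p_B = ((np.exp(eps) + 1.0) * s_hat - 1.0) / (np.exp(eps) - 1.0)
    # Step 3: invert the Hadamard transform: p_K_hat = (1/K) H_K (2 p_B_hat - 1_K).
    p_K = H @ (2.0 * p_B - 1.0) / K
    # Step 4: project the first k coordinates onto the simplex.
    return project_simplex(p_K[:k])
```

The paper's experiments use k = 1000 and ε = 1; the sketch works for any k ≤ K with K a power of two, e.g. drawing n samples from a known distribution, assigning blocks round-robin (`np.arange(n) % K`), encoding, and decoding.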