Learning discrete distributions: user vs item-level privacy
Authors: Yuhan Liu, Ananda Theertha Suresh, Felix Xinnan X. Yu, Sanjiv Kumar, Michael Riley
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show empirically that the bounds in Theorem 5 should hold by estimating the ℓ1 distance between Bin(m, 0.01) and Bin(m, 0.011). |
| Researcher Affiliation | Collaboration | Yuhan Liu (Cornell University, yl2976@cornell.edu); Ananda Theertha Suresh (Google Research, theertha@google.com); Felix Yu (Google Research, felixyu@google.com); Sanjiv Kumar (Google Research, sanjivk@google.com); Michael Riley (Google Research, riley@google.com) |
| Pseudocode | Yes | Algorithm 1 Private hypothesis selection: PHS(H, D, α, ε); Algorithm 2 Learning binomial distributions: Binom(D, ε, α); Algorithm 3 Dense regime: Dense(D, ε, δ, α); Algorithm 4 Estimation of binomial with small p: Small Binom(D, ε); Algorithm 5 Sparse regime: Sparse(D, ε, δ, α) |
| Open Source Code | No | The paper does not contain any explicit statement about releasing source code, nor does it provide a link to a code repository. |
| Open Datasets | No | The paper does not specify the use of any publicly available or open datasets for training or evaluation. The empirical demonstration in Figure 1 uses synthetic data to illustrate a mathematical bound. |
| Dataset Splits | No | The paper does not describe any specific training, validation, or test dataset splits. The empirical evaluation shown is for a mathematical bound, not a model trained on a dataset. |
| Hardware Specification | No | The paper does not specify any hardware used for the empirical approximation shown in Figure 1, such as GPU/CPU models or cloud resources. |
| Software Dependencies | No | The paper states that the ℓ1 distance in Figure 1 is approximated "by samples" (a sketch of such an estimate appears after the table), but it does not specify any software names or version numbers used for this approximation. |
| Experiment Setup | No | The paper does not provide specific details about an experimental setup, such as hyperparameter values, training configurations, or system-level settings; the empirical component is a simple sample-based approximation of a mathematical bound with no detailed setup reported. |
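
The only empirical component of the paper is the approximation of the ℓ1 distance between Bin(m, 0.01) and Bin(m, 0.011) shown in Figure 1, which the paper says is computed "by samples" without further detail. The sketch below shows one way such a sample-based estimate could be carried out; the histogram-based estimator, the sample size, and the function name `l1_distance_by_samples` are illustrative assumptions, not the authors' procedure.

```python
import numpy as np


def l1_distance_by_samples(m, p1=0.01, p2=0.011, n_samples=200_000, seed=0):
    """Estimate ||Bin(m, p1) - Bin(m, p2)||_1 from empirical histograms.

    The paper only states that the l1 distance in Figure 1 is approximated
    "by samples"; this estimator and its sample size are assumptions made
    for illustration.
    """
    rng = np.random.default_rng(seed)
    x1 = rng.binomial(m, p1, size=n_samples)
    x2 = rng.binomial(m, p2, size=n_samples)
    # Empirical pmfs over the shared support {0, ..., m}.
    pmf1 = np.bincount(x1, minlength=m + 1) / n_samples
    pmf2 = np.bincount(x2, minlength=m + 1) / n_samples
    # Note: this plug-in estimate is biased upward when the true distance
    # is small relative to the sampling noise; increasing n_samples helps.
    return np.abs(pmf1 - pmf2).sum()


if __name__ == "__main__":
    for m in (10, 100, 1_000, 10_000):
        print(m, l1_distance_by_samples(m))
```

A histogram estimator is a natural choice here because both distributions share the finite support {0, …, m}, and the estimate converges to the true ℓ1 distance as the number of samples grows.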