Optimal Algorithms for Mean Estimation under Local Differential Privacy

Authors: Hilal Asi, Vitaly Feldman, Kunal Talwar

ICML 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct several experiments that demonstrate that the error of both algorithms is nearly the same as we increase the dimension. We plot the ratio of the error of PrivUnitG and PrivUnit (for the same p and γ) for different values of ε and dimension in Figure 1. These plots reaffirm the theoretical results of Theorem 4.2, namely that the ratio is smaller for large d and small ε.
Researcher Affiliation | Collaboration | ¹Stanford University (part of this work was performed while interning at Apple); ²Apple, USA.
Pseudocode | Yes | We provide full details in Algorithm 1. ... We present the full details including the normalization constants in Algorithm 2. (A hedged sketch of such a randomizer is given after this table.)
Open Source Code | No | The paper does not contain any statement or link indicating that the source code for the methodology described is publicly available.
Open Datasets | No | The paper does not mention using a specific named public dataset or provide access information for any dataset used in its analysis or experiments. It refers to 'n users, each with a vector v_i in the Euclidean unit ball in R^d' as a general problem setting rather than a specific dataset.
Dataset Splits | No | The paper does not describe specific training, validation, or test dataset splits. The 'experiments' mentioned are numerical evaluations of algorithm properties, not typical machine-learning model training on a dataset with splits.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory, or cloud instance types) used for running the numerical evaluations or experiments.
Software Dependencies | No | The paper does not list specific software dependencies with version numbers.
Experiment Setup | No | While the paper discusses finding 'optimized parameters' for PrivUnit and PrivUnitG, it does not explicitly state the specific parameter values (p, γ, q) used for the numerical experiments shown in the figures, nor does it provide other typical experimental-setup details such as learning rates or training schedules.
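
The Pseudocode row refers to the paper's Algorithm 2, the Gaussian randomizer PrivUnitG. Below is a minimal Python sketch of such a randomizer, written from the paper's high-level description rather than copied from Algorithm 2: the coordinate of an N(0, I_d/d) sample along the input vector is drawn from a truncated Gaussian (above γ with probability p, below γ otherwise), the orthogonal component is left unconditioned, and the output is rescaled to be unbiased. The function name priv_unit_g, the truncated-Gaussian form of the normalization constant, and the demo parameter values are illustrative assumptions; in particular, calibrating (p, γ) to a target ε is the paper's optimization step and is not reproduced here.

```python
# Minimal sketch of a PrivUnitG-style Gaussian local randomizer, based on the
# paper's high-level description. The normalization constant below is derived
# from truncated-Gaussian means and is an assumption; the paper's Algorithm 2
# gives the exact constants. Choosing (p, gamma) for a target epsilon is the
# calibration step from the paper and is not reproduced here.
import numpy as np
from scipy.stats import norm, truncnorm


def priv_unit_g(v, p, gamma, rng=None):
    """Randomize a unit vector v in R^d so that the output is unbiased for v."""
    rng = np.random.default_rng(rng)
    d = v.shape[0]
    sigma = 1.0 / np.sqrt(d)          # base noise: g ~ N(0, I_d / d)
    t = gamma / sigma                 # standardized truncation threshold

    # Component along v: truncated Gaussian, above gamma with probability p.
    if rng.random() < p:
        alpha = truncnorm.rvs(t, np.inf, scale=sigma, random_state=rng)
    else:
        alpha = truncnorm.rvs(-np.inf, t, scale=sigma, random_state=rng)

    # Component orthogonal to v: plain Gaussian projected off span(v).
    z = rng.normal(scale=sigma, size=d)
    z_perp = z - np.dot(z, v) * v

    # Normalization constant m = E[alpha], so that E[(alpha * v + z_perp) / m] = v.
    m = sigma * norm.pdf(t) * (p / norm.sf(t) - (1.0 - p) / norm.cdf(t))
    return (alpha * v + z_perp) / m


if __name__ == "__main__":
    # Empirical sanity check of unbiasedness and squared error for one setting.
    # These (d, p, gamma) values are illustrative, not the paper's optimized choices.
    d, p, gamma = 128, 0.9, 0.2
    v = np.zeros(d)
    v[0] = 1.0
    rng = np.random.default_rng(0)
    samples = np.stack([priv_unit_g(v, p, gamma, rng) for _ in range(20_000)])
    print("mean of first coordinate (should be ~1):", samples.mean(axis=0)[0])
    print("empirical mean squared error:", ((samples - v) ** 2).sum(axis=1).mean())
```

The demo only checks unbiasedness and estimates the squared error empirically; reproducing the ratio plotted in Figure 1 would additionally require an implementation of the original PrivUnit mechanism, which is not sketched here.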