Average-Case Averages: Private Algorithms for Smooth Sensitivity and Mean Estimation

Authors: Mark Bun, Thomas Steinke

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We provide theoretical and experimental evidence showing that our noise distributions compare favorably to others in the literature, in particular, when applied to the mean estimation problem."
Researcher Affiliation | Collaboration | Mark Bun (Boston University, mbun@bu.edu); Thomas Steinke (IBM Research Almaden, smooth@thomas-steinke.net)
Pseudocode | No | The paper describes algorithms in text and mathematical formulas but does not include any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper refers to a full version on arXiv [6] for details and proofs, but does not provide an explicit statement about the release of source code or a direct link to a code repository for the methodology described.
Open Datasets | Yes | "Our data is sampled from a standard univariate Gaussian distribution."
Dataset Splits | No | The paper states that the data is sampled from a standard univariate Gaussian distribution and that a truncation interval is applied, but it does not specify explicit train, validation, or test splits or percentages.
Hardware Specification | No | The paper does not specify any particular hardware (e.g., CPU or GPU models, memory, or cloud instances) used for running the experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details, such as library names with version numbers, needed to replicate the experiment.
Experiment Setup | Yes | "The truncation interval is set conservatively to [a, b] = [−50, 1050] and the data is truncated before applying the trimmed mean... To provide the fairest possible comparison, we pick an ε value (namely, ε = 1 or ε = 0.2) and then compare (ε, 0)-differential privacy with ½ε²-CDP, (½ε², 10)-tCDP, and (ε, 10⁻⁶)-differential privacy. Each of these is implied by (ε, 0)-differential privacy and the implication is fairly tight, so each intuitively provides a roughly similar level of privacy. Aside from the privacy parameters (ε, etc.) and the dataset size (n), we show a range of trimming levels m on the horizontal axis. We numerically optimize the smoothing parameter t. We set the distribution shape parameters to appropriate near-optimal values."
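As a rough illustration of the setup quoted above, the sketch below (Python with NumPy; the paper names no software, so this is an assumption) samples standard Gaussian data, truncates it to a conservative interval [a, b], and releases an m-trimmed mean with Laplace noise scaled to the statistic's global sensitivity (b − a)/(n − 2m). This is a deliberate simplification: the paper's mechanisms instead calibrate noise from its new distributions (e.g., Laplace Log-Normal) to the smooth sensitivity of the trimmed mean, which is typically much smaller on well-concentrated data. All parameter values here are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def trimmed_mean(x, m, a, b):
    """m-trimmed mean of data truncated to [a, b]: clip to the interval,
    drop the m smallest and m largest values, and average the rest."""
    x = np.sort(np.clip(x, a, b))
    return x[m:len(x) - m].mean()

def private_trimmed_mean(x, m, a, b, eps, rng):
    """Release the trimmed mean with Laplace noise scaled to its global
    sensitivity (b - a) / (n - 2m), which gives (eps, 0)-DP.  The paper's
    mechanisms instead add noise calibrated to the smooth sensitivity."""
    n = len(x)
    sensitivity = (b - a) / (n - 2 * m)
    return trimmed_mean(x, m, a, b) + rng.laplace(scale=sensitivity / eps)

# Illustrative parameters (assumptions, not the paper's exact settings).
rng = np.random.default_rng(0)
n, m, eps = 1000, 50, 1.0
a, b = -50.0, 1050.0              # conservative truncation interval
x = rng.standard_normal(n)        # standard univariate Gaussian data
print(private_trimmed_mean(x, m, a, b, eps, rng))
```

Sweeping the trimming level m (as on the paper's horizontal axis) and swapping the Laplace call for noise calibrated to the smooth sensitivity would bring this sketch closer to the comparison the paper reports.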