Privacy Induces Robustness: Information-Computation Gaps and Sparse Mean Estimation

Authors: Kristian Georgiev, Samuel Hopkins

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Now we turn to empirically validating the performance of Algorithm 1. [...] For all figures, we average results over 10 random seeds and report average results, together with 95% bootstrap confidence intervals." (See the bootstrap-CI sketch after the table.)
Researcher Affiliation | Academia | "Kristian Georgiev, MIT EECS, Cambridge, MA 02139, krisgrg@mit.edu; Samuel B. Hopkins, MIT EECS, Cambridge, MA 02139, samhop@mit.edu"
Pseudocode | Yes | "Algorithm 1: The subroutine exp-mech refers to the exponential mechanism [MT07], and KV-1D to the univariate sparse mean estimator of [KV17]." (See the exponential-mechanism sketch after the table.)
Open Source Code | Yes | "Code necessary to reproduce all experiments is available." [Footnote: https://github.com/kristian-georgiev/privacy-induces-robustness]
Open Datasets | No | "results are shown for Gaussian data X_1, ..., X_n ~ N(μ, 4I) for a k-sparse μ in R^d for k = 20, d = 1000, n = 1000." The paper uses synthetic data generated from a Gaussian distribution, but does not provide access information (like a URL or citation to a public repository) for a pre-existing dataset.
Dataset Splits | No | The paper states that experiments were conducted on 'synthetic data' with a specified number of samples (e.g., '1500 samples', '1000 samples'). However, it does not explicitly describe any training, validation, or test data splits used in the experimental setup.
Hardware Specification | Yes | "For all experiments we use commodity hardware (CPU: Intel Core i7-9750H CPU @ 2.60GHz)."
Software Dependencies | No | The paper refers to supplementary material for training details, which might include software dependencies, but it does not explicitly list specific software components with their version numbers within the provided text.
Experiment Setup | Yes | "Empirical evaluation of 0.5-DP algorithms... for support estimation for 1500 samples from N(μ, I) in ambient dimension d = 1000 with ‖μ‖₀ = 20 (non-zero coordinates sampled uniformly from [-10, 10]) as a function of R, the a priori estimate of ‖μ‖₂. [...] results are shown for Gaussian data X_1, ..., X_n ~ N(μ, 4I) for a k-sparse μ in R^d for k = 20, d = 1000, n = 1000."
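
The Research Type row quotes the evaluation protocol: results are averaged over 10 random seeds and reported with 95% bootstrap confidence intervals. The routine below is a minimal percentile-bootstrap sketch of that protocol, not code from the authors' repository; the function name and the example error values are illustrative assumptions.

```python
import numpy as np

def bootstrap_ci(per_seed_results, n_boot=10_000, alpha=0.05, rng=None):
    """Mean and (1 - alpha) percentile-bootstrap confidence interval.

    per_seed_results: 1-D array with one metric value per random seed
    (e.g. the estimation error from each of the 10 seeds).
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(per_seed_results, dtype=float)
    # Resample the per-seed values with replacement and record each resample's mean.
    boot_means = np.array([
        rng.choice(x, size=x.size, replace=True).mean()
        for _ in range(n_boot)
    ])
    lo, hi = np.quantile(boot_means, [alpha / 2, 1 - alpha / 2])
    return x.mean(), (lo, hi)

# Hypothetical per-seed errors from 10 runs -> mean with a 95% bootstrap interval.
errors = np.array([0.42, 0.39, 0.45, 0.41, 0.44, 0.40, 0.43, 0.38, 0.46, 0.41])
mean, (lo, hi) = bootstrap_ci(errors)
print(f"mean = {mean:.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
```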
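
The Pseudocode row notes that Algorithm 1 calls exp-mech, the exponential mechanism of [MT07]. The following is a generic sketch of that mechanism in isolation, not the paper's exact subroutine (which applies it inside the sparse mean estimator); the candidate set, score function, and sensitivity are caller-supplied assumptions.

```python
import numpy as np

def exponential_mechanism(candidates, scores, epsilon, sensitivity, rng=None):
    """Generic exponential mechanism [MT07].

    Samples one candidate with probability proportional to
    exp(epsilon * score / (2 * sensitivity)), which is epsilon-DP
    when the score function has the stated sensitivity.
    """
    rng = np.random.default_rng(rng)
    scores = np.asarray(scores, dtype=float)
    # Shift by the max score before exponentiating for numerical stability.
    logits = epsilon * scores / (2.0 * sensitivity)
    logits -= logits.max()
    probs = np.exp(logits)
    probs /= probs.sum()
    idx = rng.choice(len(candidates), p=probs)
    return candidates[idx]

# Illustrative use: privately pick the index with the largest count, assuming
# one individual can change any single count by at most 1 (sensitivity = 1).
counts = [5, 9, 7, 3]
chosen = exponential_mechanism(list(range(len(counts))), counts,
                               epsilon=0.5, sensitivity=1.0)
print("privately selected index:", chosen)
```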
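
The Experiment Setup and Open Datasets rows describe the synthetic data: a k-sparse mean μ in R^d with non-zero coordinates drawn uniformly from [-10, 10], and n Gaussian samples centered at μ. The sketch below reconstructs that recipe under the quoted parameters; it is an assumed reimplementation, not the authors' data-generation code.

```python
import numpy as np

def sample_sparse_gaussian(n=1000, d=1000, k=20, coord_range=(-10.0, 10.0),
                           cov_scale=4.0, rng=None):
    """Draw n samples from N(mu, cov_scale * I) for a random k-sparse mu in R^d.

    Non-zero coordinates of mu are sampled uniformly from coord_range,
    matching the setup quoted in the Experiment Setup row.
    """
    rng = np.random.default_rng(rng)
    mu = np.zeros(d)
    support = rng.choice(d, size=k, replace=False)          # random k-sparse support
    mu[support] = rng.uniform(*coord_range, size=k)          # non-zero coords in [-10, 10]
    X = mu + np.sqrt(cov_scale) * rng.standard_normal((n, d))  # covariance cov_scale * I
    return X, mu, support

# Mean-estimation setting from the quote: k = 20, d = 1000, n = 1000, N(mu, 4I).
X, mu, support = sample_sparse_gaussian(rng=0)
```

The support-estimation experiment quoted in the same row appears to use the same recipe with n = 1500 and identity covariance, i.e. sample_sparse_gaussian(n=1500, cov_scale=1.0).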