Subset-Based Instance Optimality in Private Estimation

Authors: Travis Dick, Alex Kulesza, Ziteng Sun, Ananda Theertha Suresh

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "For means in particular, we provide a detailed analysis and show that our algorithm simultaneously matches or exceeds the asymptotic performance of existing algorithms under a range of distributional assumptions." The response also quotes the caption of Table 1: "Results for statistical mean estimation. R(A, p) is the expected absolute error of A given n i.i.d. samples from p (Equation (3)). M_k(p) denotes the k-th absolute central moment of p (Definition 5.2). σ_G(p) denotes the best sub-Gaussian proxy of p." (A rendering of these quantities is sketched after the table.)
Researcher Affiliation | Industry | Google Research, New York. Correspondence to: Ziteng Sun <zitengsun@google.com>.
Pseudocode | Yes | Algorithm 1 (Bounded mean estimation), Algorithm 2 (Threshold estimation), Algorithm 3 (Subset-optimal mean estimation), Algorithm 4 (Subset-optimal monotone property estimation), Algorithm 5 (Inverse Sensitivity Mechanism). (A hedged sketch of a generic inverse sensitivity mechanism appears after the table.)
Open Source Code | No | The paper does not provide any explicit statements about releasing code or links to a code repository.
Open Datasets | No | The paper discusses datasets only in a theoretical context (e.g., "a multiset D of points from the domain [-R, R]") and does not name, link, or cite any public dataset used for empirical evaluation.
Dataset Splits | No | The paper does not provide training/validation/test dataset splits; it focuses on theoretical analysis and algorithm design rather than empirical evaluation on specific datasets.
Hardware Specification | No | The paper does not provide any details about the hardware used to run experiments.
Software Dependencies | No | The paper does not list software dependencies or version numbers.
Experiment Setup | No | The paper describes theoretical algorithm parameters (e.g., ε, α, β, r), but it does not provide concrete experimental setup details such as hyperparameters, training configurations, or system-level settings typically found in empirical machine learning papers.
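
The Table 1 caption quoted in the Research Type row describes R(A, p) as the expected absolute error of A given n i.i.d. samples from p, and M_k(p) as the k-th absolute central moment of p. Assuming these follow the standard definitions (the paper's Equation (3) and Definition 5.2 give the authoritative statements), a LaTeX rendering is:

    R(A, p) = \mathbb{E}_{D \sim p^{n},\, A}\!\left[\, \lvert A(D) - \mu(p) \rvert \,\right],
    \qquad
    M_k(p) = \mathbb{E}_{X \sim p}\!\left[\, \lvert X - \mu(p) \rvert^{k} \,\right],

where \mu(p) is the mean of p and the expectation in R(A, p) is taken over both the sample D and the internal randomness of the algorithm A.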
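
The Pseudocode row lists Algorithm 5 as an "Inverse Sensitivity Mechanism", but the paper's pseudocode is not reproduced here. For orientation only, below is a minimal Python sketch of a generic inverse sensitivity mechanism applied to bounded mean estimation on [-R, R]: candidate outputs are drawn from a grid, scored by a standard proxy for the number of data points that would have to change for the empirical mean to reach them, and sampled via the exponential mechanism. The function name, the output grid, and the use of the clipped empirical mean are illustrative assumptions, not the paper's Algorithm 5 or its guarantees.

    import numpy as np

    def inverse_sensitivity_mean(data, R, eps, grid_size=1000, rng=None):
        """Hedged sketch (not the paper's Algorithm 5): a generic inverse
        sensitivity mechanism for the mean of n points clipped to [-R, R].

        For each candidate output t on a grid over [-R, R], len(D, t) is the
        minimum number of data points that must change so that the clipped
        mean equals t. Since altering one point moves the mean by at most
        2R/n, we use the standard proxy ceil(n * |mean(D) - t| / (2R)).
        Sampling t with probability proportional to exp(-eps * len(D, t) / 2)
        is eps-differentially private by the exponential mechanism, because
        len(D, t) changes by at most 1 when one record changes.
        """
        rng = np.random.default_rng() if rng is None else rng
        data = np.clip(np.asarray(data, dtype=float), -R, R)
        n = len(data)
        empirical_mean = data.mean()

        # Candidate outputs: a uniform grid over the output range [-R, R].
        candidates = np.linspace(-R, R, grid_size)

        # Proxy for len(D, t): points to change to reach each candidate.
        path_len = np.ceil(n * np.abs(empirical_mean - candidates) / (2 * R))

        # Exponential mechanism with utility -len(D, t) (sensitivity 1).
        log_weights = -eps * path_len / 2.0
        log_weights -= log_weights.max()  # numerical stability
        probs = np.exp(log_weights)
        probs /= probs.sum()

        return rng.choice(candidates, p=probs)

    # Example usage (illustrative parameters):
    # samples = np.random.normal(loc=1.0, scale=2.0, size=500)
    # print(inverse_sensitivity_mean(samples, R=10.0, eps=1.0))

This generic construction is only a reference point; the paper's subset-based algorithms should be consulted for the actual mechanisms and their instance-optimality guarantees.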