Exact Optimality of Communication-Privacy-Utility Tradeoffs in Distributed Mean Estimation

Authors: Berivan Isik, Wei-Ning Chen, Ayfer Ozgur, Tsachy Weissman, Albert No

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We empirically demonstrate the communication-privacy-utility tradeoffs of RRSC and compare it with order-optimal schemes under privacy and communication constraints, namely SQKR [6] and MMRC [30].
Researcher Affiliation | Academia | Berivan Isik (Stanford University, berivan.isik@stanford.edu); Wei-Ning Chen (Stanford University, wnchen@stanford.edu); Ayfer Ozgur (Stanford University, aozgur@stanford.edu); Tsachy Weissman (Stanford University, tsachy@stanford.edu); Albert No (Hongik University, albertno@hongik.ac.kr)
Pseudocode | Yes | We call this approach Randomly Rotating Simplex Coding (RRSC) and provide the pseudocode in Algorithm 1. (A hedged sketch of the underlying rotate-and-quantize idea appears below the table.)
Open Source Code | Yes | The codebase for this work is open-sourced at https://github.com/BerivanIsik/rrsc.
Open Datasets | No | The paper uses synthetically generated data: 'More precisely, for the first half of the users, we set v_1, ..., v_{n/2} i.i.d. N(1, 1)^d; and for the second half of the users, we set v_{n/2+1}, ..., v_n i.i.d. N(10, 1)^d. We further normalize each v_i to ensure that they lie on S^{d-1}.' It does not provide access information for a publicly available dataset. (A data-generation sketch appears below the table.)
Dataset Splits | No | The paper describes the generation of synthetic data but does not specify train/validation/test splits or cross-validation for experimental reproduction.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts, or detailed computer specifications) used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details, such as library or solver names with version numbers, needed to replicate the experiment.
Experiment Setup | Yes | To find the optimal values for k and r_k, we compute the optimal r_k using the formula in (33) for k = 1, ..., M and pick the k that gives the smallest r_k (which corresponds to the bias). To estimate the expectation C_k in (33), we run a Monte Carlo simulation with 1M trials. (A sketch of this selection loop appears below the table.)
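
The paper's Algorithm 1 is not reproduced in this summary. For orientation only, the sketch below illustrates the generic rotate-then-quantize-to-a-simplex idea the name RRSC suggests; the simplex construction, the Gaussian-QR rotation, and the `scale` argument are illustrative assumptions, and the paper's privacy mechanism and exact debiasing factor r_k are omitted.

```python
import numpy as np

def regular_simplex(d):
    # The d+1 standard basis vectors of R^{d+1}, centered at their centroid, form a
    # regular simplex in the hyperplane orthogonal to the all-ones vector.
    pts = np.eye(d + 1) - 1.0 / (d + 1)
    # The first d right singular vectors span that hyperplane; use them to drop into R^d.
    basis = np.linalg.svd(pts)[2][:d]
    verts = pts @ basis.T
    return verts / np.linalg.norm(verts, axis=1, keepdims=True)  # (d+1) unit codewords

def shared_rotation(d, seed):
    # Random orthogonal matrix from a seed shared by client and server
    # (assumed construction: QR decomposition of a Gaussian matrix).
    rng = np.random.default_rng(seed)
    q, r = np.linalg.qr(rng.standard_normal((d, d)))
    return q * np.sign(np.diag(r))

def client_encode(v, rotation, codebook):
    # Rotate the client's unit vector and transmit only the index of the nearest simplex vertex.
    return int(np.argmax(codebook @ (rotation @ v)))

def server_decode(idx, rotation, codebook, scale=1.0):
    # Map the index back to its vertex and undo the rotation; `scale` is a stand-in
    # for the paper's debiasing factor r_k, which this sketch does not derive.
    return scale * (rotation.T @ codebook[idx])
```

With this encoding, each client sends only about log2(d+1) bits per round, which is the communication-saving aspect the scheme's name points to.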
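The quoted data setup is concrete enough to reproduce directly. A minimal sketch, assuming n users, dimension d, and a seed that the quote does not fix:

```python
import numpy as np

def generate_users(n, d, seed=0):
    # First half of the users: i.i.d. N(1, 1)^d; second half: i.i.d. N(10, 1)^d.
    rng = np.random.default_rng(seed)
    v = np.vstack([
        rng.normal(loc=1.0,  scale=1.0, size=(n // 2, d)),
        rng.normal(loc=10.0, scale=1.0, size=(n - n // 2, d)),
    ])
    # Normalize each v_i so that it lies on the unit sphere S^{d-1}.
    return v / np.linalg.norm(v, axis=1, keepdims=True)
```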
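Equation (33) and the exact definition of C_k are not reproduced in this summary, so the following only illustrates the described selection loop; `sample_fn` and `r_from_C` are hypothetical placeholders standing in for the paper's quantities.

```python
import numpy as np

def estimate_C(k, sample_fn, num_trials=1_000_000, seed=0):
    # Monte Carlo estimate of the expectation C_k with 1M trials.
    # `sample_fn(k, rng)` is a placeholder returning one draw of the random
    # quantity whose mean defines C_k in Eq. (33).
    rng = np.random.default_rng(seed)
    draws = np.fromiter((sample_fn(k, rng) for _ in range(num_trials)),
                        dtype=float, count=num_trials)
    return draws.mean()

def pick_k(M, r_from_C, sample_fn):
    # Evaluate r_k (the closed form in Eq. (33), here the placeholder `r_from_C`)
    # for k = 1, ..., M and return the k with the smallest r_k (the bias).
    r = {k: r_from_C(k, estimate_C(k, sample_fn)) for k in range(1, M + 1)}
    best_k = min(r, key=r.get)
    return best_k, r[best_k]
```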