Breaking the Communication-Privacy-Accuracy Trilemma

Authors: Wei-Ning Chen, Peter Kairouz, Ayfer Özgür

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The paper reports empirical results: "We also provide empirical evidence that our scheme requires significantly less communication while achieving the same accuracy and privacy levels as the state-of-the-art approaches." In Figure 1, SQKR is compared with a concatenation of the separately optimal schemes of [17] and [24], and under the same privacy and communication constraints SQKR achieves much smaller estimation errors; more detailed experiments appear in Section C. (A minimal stand-in for this evaluation pipeline is sketched after the table.)
Researcher Affiliation | Collaboration | Wei-Ning Chen (Department of Electrical Engineering, Stanford University; wnchen@stanford.edu); Peter Kairouz (Google; kairouz@google.com); Ayfer Özgür (Department of Electrical Engineering, Stanford University; aozgur@stanford.edu)
Pseudocode | No | The paper describes its algorithms and schemes in prose but does not include any explicitly labeled "Pseudocode" or "Algorithm" blocks, nor structured, code-like steps.
Open Source Code | No | The paper makes no explicit statement about releasing source code, includes no link to a code repository, and does not mention code in the supplementary materials.
Open Datasets | No | The paper mentions generating synthetic data "from different distributions" for its experiments (e.g., a "truncated and normalized geometric distribution with ε = 0.8"), but it does not specify or provide access information (links, DOIs, formal citations) for any established public datasets. (See the data-generation sketch after the table.)
Dataset Splits | No | The paper does not describe training, validation, or test splits. The experimental setup generates data from different distributions for performance comparison rather than using predefined splits for model training and evaluation.
Hardware Specification | No | The paper gives no details about the hardware used to run the experiments, such as GPU/CPU models, memory, or cloud computing resources.
Software Dependencies | No | The paper does not list ancillary software dependencies, such as library names with version numbers or specific solver versions.
Experiment Setup | No | The paper does not report setup details such as hyperparameter values, training configurations, or system-level settings typically found in machine learning experiments.
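For context on the experimental protocol the table references: the paper's evaluation draws synthetic samples from a discrete distribution (e.g., the truncated and normalized geometric distribution quoted above), privatizes each sample with a locally differentially private mechanism, and measures the error of the reconstructed distribution. The sketch below reproduces that pipeline in minimal form under explicit assumptions: the symbol garbled in the quote is taken to be the geometric decay parameter (here called lam, set to 0.8), the truncation convention is a guess, and k-ary randomized response is used as a simple stand-in mechanism. It is NOT the paper's SQKR scheme, and the support size k, sample size n, and privacy level eps are illustrative choices, not values from the paper.

# Hypothetical re-creation of the evaluation pipeline; a sketch only,
# not the paper's SQKR mechanism or its exact experimental settings.
import numpy as np

rng = np.random.default_rng(0)

def truncated_geometric(k, lam=0.8):
    """Geometric pmf lam**i truncated to support {0, ..., k-1}, renormalized."""
    p = lam ** np.arange(k)
    return p / p.sum()

def k_rr(x, k, eps):
    """k-ary randomized response: keep x w.p. e^eps / (e^eps + k - 1),
    otherwise report a uniformly random other symbol."""
    if rng.random() < np.exp(eps) / (np.exp(eps) + k - 1):
        return x
    other = rng.integers(k - 1)        # uniform over {0, ..., k-2}
    return other + (other >= x)        # shift past x to skip it

def estimate(reports, k, eps):
    """Debiased empirical frequencies for k-RR reports."""
    n = len(reports)
    counts = np.bincount(reports, minlength=k) / n
    p_keep = np.exp(eps) / (np.exp(eps) + k - 1)
    p_flip = (1 - p_keep) / (k - 1)
    # E[counts_j] = p_j * (p_keep - p_flip) + p_flip, so invert the affine map.
    return (counts - p_flip) / (p_keep - p_flip)

k, eps, n = 100, 1.0, 50_000           # illustrative choices, not the paper's
p = truncated_geometric(k)
samples = rng.choice(k, size=n, p=p)
reports = np.array([k_rr(x, k, eps) for x in samples])
p_hat = estimate(reports, k, eps)
print("ell_1 error:", np.abs(p_hat - p).sum())
print("ell_2 error:", np.linalg.norm(p_hat - p))

Running the sketch prints the ℓ1 and ℓ2 errors of the debiased estimate for one mechanism. A comparison in the spirit of the paper's Figure 1 would additionally sweep the privacy level eps and the per-sample communication budget, and plot the resulting errors for SQKR against the baseline schemes.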