Extremal Mechanisms for Local Differential Privacy
Authors: Peter Kairouz, Sewoong Oh, Pramod Viswanath
NeurIPS 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We introduce a family of extremal privatization mechanisms, which we call staircase mechanisms, and prove that it contains the optimal privatization mechanism that maximizes utility. We further show that for all information theoretic utility functions studied in this paper, maximizing utility is equivalent to solving a linear program, the outcome of which is the optimal staircase mechanism. (An illustrative sketch of the simplest staircase mechanism follows the table.) |
| Researcher Affiliation | Academia | ¹Department of Electrical & Computer Engineering, ²Department of Industrial & Enterprise Systems Engineering, University of Illinois Urbana-Champaign, Urbana, IL 61801, USA |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code for the described methodology (e.g., a repository link, an explicit statement of code release, or a mention of code in supplementary materials). |
| Open Datasets | No | The paper is theoretical and does not use any datasets for training or for any other experimental phase, so no information on dataset availability is provided. |
| Dataset Splits | No | The paper is theoretical and does not involve empirical experiments with dataset splits. No training, validation, or testing splits are discussed. |
| Hardware Specification | No | The paper is theoretical and does not involve computational experiments, thus no hardware specifications are mentioned. |
| Software Dependencies | No | The paper is theoretical and does not describe computational experiments that would require specific software dependencies with version numbers. |
| Experiment Setup | No | The paper is theoretical and does not include empirical experiments, thus there are no details provided regarding experimental setup, hyperparameters, or system-level training settings. |
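Although no code accompanies the paper, the simplest member of the staircase family is easy to illustrate. The sketch below is our own illustration, not from the paper; the function name `randomized_response` is ours. It implements binary randomized response, whose two conditional probabilities, e^ε/(1+e^ε) and 1/(1+e^ε), form the two-level "staircase" that the paper identifies as optimal for binary alphabets.

```python
import math
import random

def randomized_response(bit: int, epsilon: float) -> int:
    """Binary randomized response under epsilon-local differential privacy.

    Reports the true bit with probability e^eps / (1 + e^eps) and flips it
    otherwise, so the likelihood ratio of any output under the two possible
    inputs is at most e^eps -- the defining epsilon-LDP constraint.
    """
    p_truth = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return bit if random.random() < p_truth else 1 - bit

# Example: privatize a sensitive bit at epsilon = 1.
private_bit = randomized_response(1, epsilon=1.0)
```

For larger alphabets, the paper's result is that the optimal mechanism still uses only two conditional-probability values (the staircase structure), and choosing which outputs receive which value reduces to the linear program mentioned in the Research Type row above.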