Permute-and-Flip: A new mechanism for differentially private selection
Authors: Ryan McKenna, Daniel R. Sheldon
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We now perform an empirical analysis of the permute-and-flip mechanism. Our aim is to quantify the utility improvement from permute-and-flip relative to the exponential mechanism for different values of ε on real-world problem instances. We use five representative data sets from the DPBench study: HEPTH, ADULTFRANK, MEDCOST, SEARCHLOGS, and PATENT [20] and consider the tasks of mode and median selection. |
| Researcher Affiliation | Academia | Ryan McKenna and Daniel Sheldon, College of Information and Computer Sciences, University of Massachusetts, Amherst, Amherst, MA 01002, {rmckenna, sheldon}@cs.umass.edu |
| Pseudocode | Yes | Algorithm 1: Permute-and-Flip Mechanism, MPF(q); Algorithm 2: MEM(q); Algorithm 3: MPF(q) |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | We use five representative data sets from the DPBench study: HEPTH, ADULTFRANK, MEDCOST, SEARCHLOGS, and PATENT [20] |
| Dataset Splits | No | The paper mentions using specific datasets for evaluation but does not provide details on training, validation, or test splits for these datasets. It focuses on analytical computation of expected error rather than traditional machine learning training with data splits. |
| Hardware Specification | No | The paper does not specify the hardware used for running the experiments (e.g., CPU, GPU models, memory, or specific computing environments). |
| Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., Python, PyTorch, specific libraries). |
| Experiment Setup | No | The paper describes varying the privacy budget epsilon (ε) and notes the use of 1024 bins for discretized domains, but it does not provide specific experimental setup details such as hyperparameters for model training (e.g., learning rate, batch size, optimizer settings), since the experiments are analytical evaluations of mechanisms rather than machine learning model training. |
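As context for the pseudocode row above, the paper's Algorithm 1 (the permute-and-flip mechanism, MPF(q)) can be sketched as follows. This is a minimal illustrative implementation, not the authors' code; the function name, the `sensitivity=1.0` default, and the use of Python's `random` module are assumptions made here for clarity.

```python
import math
import random

def permute_and_flip(q, epsilon, sensitivity=1.0):
    """Sketch of the permute-and-flip mechanism (Algorithm 1 in the paper).

    q            -- list of quality scores, one per candidate
    epsilon      -- differential privacy budget
    sensitivity  -- max change in any score between neighboring datasets
    Returns the index of the selected candidate.
    """
    q_max = max(q)
    candidates = list(range(len(q)))
    random.shuffle(candidates)  # "permute": visit candidates in random order
    for r in candidates:
        # "flip": accept candidate r with probability exp(eps*(q_r - q*)/(2*Delta))
        p = math.exp(epsilon * (q[r] - q_max) / (2.0 * sensitivity))
        if random.random() <= p:
            return r
    # Unreachable: at least one candidate attains q_max, for which p == 1.
    raise RuntimeError("no candidate accepted")
```

The loop always terminates because any candidate achieving the maximum score is accepted with probability 1; lower-scoring candidates are exponentially down-weighted, which is what yields the mechanism's utility advantage over the exponential mechanism discussed in the Research Type row.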