The Saddle-Point Method in Differential Privacy
Authors: Wael Alghamdi, Juan Felipe Gomez, Shahab Asoodeh, Flavio Calmon, Oliver Kosut, Lalitha Sankar
ICML 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In practice, we demonstrate through numerical experiments that the SPA provides a precise approximation of privacy guarantees competitive with purely numerical-based methods (such as FFT-based accountants), while enjoying closed-form mathematical expressions. |
| Researcher Affiliation | Academia | ¹School of Engineering and Applied Sciences, Harvard University, Cambridge, Massachusetts, USA; ²Department of Computing and Software, McMaster University, Hamilton, Ontario, Canada; ³School of Electrical, Computer, and Energy Engineering, Arizona State University, Tempe, Arizona, USA. |
| Pseudocode | Yes | See Appendix L for the SPA-MSD pseudocode, ... Algorithm 1: SADDLEPOINTACCOUNTANT (SPA) is presented in the paper's main text. (A toy illustration of the MSD step appears in the second sketch below the table.) |
| Open Source Code | Yes | We provide a Python implementation of the proposed SPA at https://github.com/Felipe-Gomez/saddlepoint_accountant |
| Open Datasets | No | The paper primarily discusses the composition of privacy mechanisms (e.g., 'subsampled Gaussian mechanisms', 'Laplace mechanism') and parameters related to them. While CIFAR-10 is mentioned in Appendix N, it is in the context of parameters used by 'a real-world application of DP on the image classification SGD algorithm in (De et al., 2022)', not as a dataset used in *this paper's* main experiments, and no access information (citation with author/year, link) is provided for it. |
| Dataset Splits | No | The paper focuses on privacy mechanisms and their composition, not on machine learning model training with dataset splits like training, validation, or test sets. |
| Hardware Specification | No | 'This ground-truth curve was computed on a 64-core cluster using multi-processing to distribute the workload, and took a wall-time of 45 minutes.' The paper mentions a '64-core cluster' but does not specify CPU or GPU models, memory capacity, or cloud instance types. |
| Software Dependencies | No | The paper mentions a 'Python implementation' in a footnote, but does not specify the version of Python or any other software dependencies with their version numbers (e.g., specific libraries like NumPy, PyTorch). |
| Experiment Setup | Yes | Figure 1. Accounting for the composition of 3000 subsampled Gaussian mechanisms, with noise scale σ = 2 and subsampling rate λ = 0.01. The remaining FFT discretization parameters are set to εerror = 0.07, δerror = 10⁻¹⁰ for the PRV Accountant (Gopi et al., 2021), and a discretization interval length of 2·10⁻⁴ for Connect the Dots (Doroshenko et al., 2022). (This setting is exercised in the first sketch below.) |
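
For context, the Figure 1 configuration quoted in the Experiment Setup row can be exercised with the open-source PRV Accountant of Gopi et al. (2021). The sketch below is illustrative rather than taken from the paper: it assumes the pip-installable `prv-accountant` package and its documented `Accountant` interface, and the target δ = 10⁻⁵ is our own choice (the row above fixes only the mechanism parameters and the discretization tolerances; δerror is left at the package default here).

```python
# Illustrative sketch (not the paper's code): running the FFT-based
# PRV Accountant (Gopi et al., 2021) on the Figure 1 setting --
# 3000 compositions of the Poisson-subsampled Gaussian mechanism
# with noise scale sigma = 2 and subsampling rate lambda = 0.01.
# Assumes `pip install prv-accountant`; delta below is our own choice.
from prv_accountant import Accountant

accountant = Accountant(
    noise_multiplier=2.0,        # noise scale sigma
    sampling_probability=0.01,   # subsampling rate lambda
    delta=1e-5,                  # illustrative target delta (not fixed above)
    eps_error=0.07,              # epsilon discretization error from the table
    max_compositions=3000,
)

# Lower bound, point estimate, and upper bound on epsilon after 3000 steps.
eps_low, eps_est, eps_up = accountant.compute_epsilon(num_compositions=3000)
print(f"epsilon in [{eps_low:.3f}, {eps_up:.3f}] (estimate {eps_est:.3f})")
```

The returned interval [eps_low, eps_up] is what the εerror = 0.07 tolerance controls; the paper's SPA targets the same privacy curve with closed-form expressions in place of FFT discretization.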
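The saddle-point step itself can also be illustrated in a few lines. The sketch below is our own toy illustration, not the authors' Algorithm 1: it uses a standard contour-integral representation δ(ε) = (1/2πi) ∫ e^{K(t) − tε} / (t(t+1)) dt, where K is the cumulant generating function of the privacy-loss random variable, together with the leading-order method-of-steepest-descent (MSD) estimate obtained by solving K′(t₀) = ε. The non-subsampled Gaussian mechanism is chosen as the test case because its δ(ε) is known in closed form.

```python
# Toy illustration of the MSD idea (our own sketch, not the paper's
# Algorithm 1). Leading-order saddle-point estimate of delta(eps):
#   delta(eps) ~ exp(K(t0) - t0*eps) / (t0*(t0+1)*sqrt(2*pi*K''(t0))),
# where t0 > 0 solves K'(t0) = eps. For n composed Gaussian mechanisms
# (sensitivity 1), the privacy-loss RV is N(m, 2m) with m = n/(2*sigma^2),
# so K(t) = m*t*(1 + t), and delta(eps) has an exact closed form to compare.
import numpy as np
from scipy.stats import norm

def delta_exact(eps, n, sigma):
    """Exact delta(eps) for the n-fold composed Gaussian mechanism."""
    mu = np.sqrt(n) / sigma
    return norm.cdf(mu / 2 - eps / mu) - np.exp(eps) * norm.cdf(-mu / 2 - eps / mu)

def delta_msd(eps, n, sigma):
    """Leading-order saddle-point (MSD) estimate of delta(eps)."""
    m = n / (2 * sigma ** 2)              # privacy-loss RV is N(m, 2m)
    t0 = (eps - m) / (2 * m)              # saddle point: K'(t0) = eps
    assert t0 > 0, "leading-order formula needs eps > E[privacy loss]"
    K0, K2 = m * t0 * (1 + t0), 2 * m     # K(t0) and K''(t0)
    return np.exp(K0 - t0 * eps) / (t0 * (t0 + 1) * np.sqrt(2 * np.pi * K2))

n, sigma, eps = 3000, 2.0, 450.0
print(delta_exact(eps, n, sigma), delta_msd(eps, n, sigma))
# The MSD estimate tracks the exact value to leading order; the full
# SPA refines this leading term with higher-order correction terms.
```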