Unified Mechanism-Specific Amplification by Subsampling and Group Privacy Amplification
Authors: Jan Schuchardt, Mihail Stoian, Arthur Kosmala, Stephan Günnemann
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental evaluation demonstrates that our tight mechanism-specific guarantees outperform both tight mechanism-agnostic bounds and classic group privacy results. |
| Researcher Affiliation | Academia | Jan Schuchardt¹, Mihail Stoian², Arthur Kosmala¹, Stephan Günnemann¹; {j.schuchardt, a.kosmala, s.guennemann}@tum.de, mihail.stoian@utn.de; ¹Dept. of Computer Science & Munich Data Science Institute, Technical University of Munich; ²Dept. of Engineering, University of Technology Nuremberg |
| Pseudocode | No | The paper describes algorithms and procedures using prose and mathematical formulations but does not contain any structured pseudocode blocks or figures explicitly labeled 'Algorithm' or 'Pseudocode'. |
| Open Source Code | Yes | An implementation will be made available at https://cs.cit.tum.de/daml/group-amplification. |
| Open Datasets | Yes | We train a convolutional neural network (2 convolution layers with kernel sizes 3 and 32 / 64 channels, followed by two linear layers with hidden dimension 128) for image classification on MNIST (55000 training, 5000 validation, 10000 test samples). |
| Dataset Splits | Yes | We train a convolutional neural network (2 convolution layers with kernel sizes 3 and 32 / 64 channels, followed by two linear layers with hidden dimension 128) for image classification on MNIST (55000 training, 5000 validation, 10000 test samples). |
| Hardware Specification | Yes | We conduct all experiments on a set of Xeon E5-2630 v4 CPUs @ 2.2 GHz. |
| Software Dependencies | Yes | To perform high-precision quadrature for RDP guarantees, we use the tanh-sinh quadrature implementation from the mpmath library (version 1.3.0). For PLD accounting and evaluation of ADP guarantees via bisection, we use and extend the dp_accounting library [45] (commit 0b109e959470c43e9f177d5411603b70a56cdc7a)...For conversion from RDP to ADP guarantees, we use the get_privacy_spent method implemented in the Opacus library [73] (version 1.4.1). |
| Experiment Setup | Yes | Further details on the experimental setup are provided in Appendix C...We set the gradient clipping norm of DP-SGD [6] to C = 10⁻⁴, the Gaussian noise standard deviation to 0.6 C, and the subsampling rate to r = 64 / 55000. The optimizer is ADAM with learning rate 1e-3. |
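The software-dependency row mentions tanh-sinh quadrature from mpmath for high-precision RDP integrals. The following is an illustrative sketch (not the authors' code) of how mpmath's tanh-sinh quadrature is invoked; the Gaussian-density integrand here is a stand-in example, not a formula from the paper.

```python
# Illustrative sketch: high-precision tanh-sinh quadrature with mpmath,
# the quadrature method the paper reports using for RDP guarantees.
from mpmath import mp, exp, sqrt, pi, quad

mp.dps = 50  # 50 decimal digits of working precision

def gaussian_pdf(x, sigma):
    # Standard Gaussian density with standard deviation sigma.
    return exp(-x**2 / (2 * sigma**2)) / (sigma * sqrt(2 * pi))

# Integrate the density over the whole real line; tanh-sinh quadrature
# handles infinite intervals and endpoint singularities accurately.
total = quad(lambda x: gaussian_pdf(x, 1), [-mp.inf, mp.inf], method='tanh-sinh')
print(total)  # should be very close to 1
```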
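The paper converts RDP guarantees to ADP guarantees via Opacus' `get_privacy_spent`. As a self-contained sketch (an assumption, not the authors' or Opacus' implementation), the classic conversion takes the minimum over Rényi orders of rdp(α) + log(1/δ)/(α−1); libraries like Opacus implement refined variants of this bound.

```python
# Sketch of the classic RDP-to-ADP conversion (illustrative, not the
# authors' code): eps(delta) = min_alpha [ rdp(alpha) + log(1/delta) / (alpha - 1) ].
import math

def rdp_to_adp(orders, rdp_values, delta):
    """Convert RDP values at the given Renyi orders to a single epsilon for this delta."""
    candidates = [
        rdp + math.log(1 / delta) / (alpha - 1)
        for alpha, rdp in zip(orders, rdp_values)
        if alpha > 1
    ]
    return min(candidates)

# Example: the Gaussian mechanism with sensitivity 1 and noise sigma
# satisfies (alpha, alpha / (2 * sigma^2))-RDP.
sigma = 2.0
orders = [1.5, 2, 4, 8, 16, 32, 64]
rdp_values = [alpha / (2 * sigma**2) for alpha in orders]
eps = rdp_to_adp(orders, rdp_values, delta=1e-5)
print(eps)
```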
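The experiment-setup row can be restated as plain quantities. This is a hedged sketch assuming the garbled clipping norm reads C = 10⁻⁴; all variable names are illustrative, and the noise multiplier is simply the ratio of noise standard deviation to clipping norm as consumed by standard privacy accountants.

```python
# Hedged restatement of the reported DP-SGD hyperparameters (names are
# illustrative; C = 1e-4 is an assumption about the garbled source text).
clip_norm = 1e-4                  # gradient clipping norm C
noise_std = 0.6 * clip_norm       # Gaussian noise standard deviation 0.6 * C
batch_size = 64
num_train = 55000                 # MNIST training samples reported in the paper
subsampling_rate = batch_size / num_train   # r = 64 / 55000
noise_multiplier = noise_std / clip_norm    # sigma / C, as used by accountants
print(subsampling_rate, noise_multiplier)
```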