Online Sensitivity Optimization in Differentially Private Learning
Authors: Filippo Galli, Catuscia Palamidessi, Tommaso Cucinotta
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our method is thoroughly assessed against alternative fixed and adaptive strategies across diverse datasets, tasks, model dimensions, and privacy levels. Our results indicate that it performs comparably or better in the evaluated scenarios, given the same privacy requirements. |
| Researcher Affiliation | Academia | ¹Scuola Normale Superiore, ²INRIA, Palaiseau, France, ³École Polytechnique, Palaiseau, France, ⁴Scuola Superiore Sant'Anna, Pisa, Italy |
| Pseudocode | Yes | Algorithm 1: Differentially private optimization with OSO-DPSGD |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | In particular, we explore how online sensitivity optimization can be an effective tool in reducing the privacy and computational costs of running large grid searches. In an effort to draw conclusions that can be as general as possible, we identify three vastly adopted datasets in the literature: MNIST (LeCun et al. 1998), Fashion MNIST (Xiao, Rasul, and Vollgraf 2017), and AG News (Gulli 2005; Zhang, Zhao, and LeCun 2015). |
| Dataset Splits | Yes | we validate each model at training time every 50 iterations on the full test set, and pick the model checkpoint at the best value as representative of the corresponding configuration. |
| Hardware Specification | No | The paper does not provide any specific details regarding the hardware (e.g., GPU models, CPU types, memory) used to run the experiments. |
| Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions or library versions). |
| Experiment Setup | Yes | Batch Size 512. In all of our experiments, we fix the ranges of the hyperparameters to the same values... C ∈ [10⁻², 10²] for the clipping threshold, ρ ∈ [10⁻²·⁵, 10¹·⁵] for the learning rate, and γ ∈ [0.1, 0.9] for the target quantile. ρ_c = ρ_r = 2.5·10⁻³ for all the experiments. The initial value for the clipping threshold in both Fixed Quantile and Online is set to C₀ = 0.1. Each configuration runs for 10 epochs. |
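The setup row above (per-example clipping threshold C, target quantile γ, initial C₀ = 0.1) suggests the general shape of a DP-SGD step with quantile-driven adaptation of the clipping threshold. Below is a minimal NumPy sketch of that pattern; it is illustrative only, assuming a simple multiplicative update toward the target quantile rather than the paper's exact OSO-DPSGD rule (function names and the step size `eta` are our own, not the paper's notation):

```python
import numpy as np

def clip_and_noise(per_example_grads, C, sigma, rng):
    """One privatized gradient step: clip each per-example gradient to
    L2 norm C, average, and add Gaussian noise scaled by sigma * C."""
    norms = np.linalg.norm(per_example_grads, axis=1)
    scale = np.minimum(1.0, C / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale[:, None]
    noise = rng.normal(0.0, sigma * C, size=per_example_grads.shape[1])
    return clipped.mean(axis=0) + noise / len(per_example_grads), norms

def update_clip_threshold(C, norms, gamma, eta=2.5e-3):
    """Nudge C toward the gamma-quantile of observed gradient norms:
    if fewer than a gamma fraction of norms fall below C, grow C;
    otherwise shrink it (multiplicative/geometric update)."""
    frac_below = np.mean(norms <= C)
    return C * np.exp(-eta * (frac_below - gamma))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    grads = rng.normal(size=(512, 10))  # toy per-example gradients
    C = 0.1                             # C0 = 0.1, as in the table above
    for _ in range(200):
        g, norms = clip_and_noise(grads, C, sigma=1.0, rng=rng)
        C = update_clip_threshold(C, norms, gamma=0.5, eta=0.1)
    # C converges near the median per-example gradient norm
```

Note that the privatized mean `g` never reveals individual norms directly; in a fully private pipeline the quantile estimate itself must also be privatized, which is part of what the paper's online sensitivity optimization addresses.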