DP-SGD Without Clipping: The Lipschitz Neural Network Way

Authors: Louis Béthune, Thomas Massena, Thibaut Boissin, Aurélien Bellet, Franck Mamalet, Yannick Prudent, Corentin Friedrich, Mathieu Serrurier, David Vigouroux

ICLR 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility assessment: each entry below gives the variable, the assessed result, and the supporting LLM response quoting the paper.
Research Type: Experimental. "We validate our implementation with a speed benchmark against competing approaches, and we present the privacy/utility Pareto fronts that can be obtained with GNP networks."
Researcher Affiliation: Collaboration. "IRIT, Université Paul Sabatier, Toulouse. IRT Saint Exupéry, Toulouse. Inria, Université de Montpellier."
Pseudocode: Yes. "Algorithm 1: Backpropagation for Bounds(f, X)"
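Algorithm 1 is the core of the clipping-free approach: because every layer has a known Lipschitz constant, a bound on each layer's parameter-gradient norm can be computed by propagating an input-norm bound forward and a loss-gradient bound backward. The sketch below illustrates that idea under simplifying assumptions; it is not the authors' implementation, and LayerSpec, backprop_for_bounds, and the dense-layer rule that the parameter-gradient scale equals the input-norm bound are ours.

    # Illustrative sketch of the bound back-propagation idea, assuming a
    # feed-forward network of layers with known Lipschitz constants.
    # Not the lip-dp code.
    from dataclasses import dataclass

    @dataclass
    class LayerSpec:
        lipschitz: float  # Lipschitz constant of the layer w.r.t. its input

        def param_grad_scale(self, input_bound: float) -> float:
            # For a dense layer y = Wx, ||dL/dW|| <= ||dL/dy|| * ||x||,
            # so the parameter-gradient scale is the input-norm bound.
            return input_bound

    def backprop_for_bounds(layers, input_bound, loss_lipschitz):
        """Return a per-layer bound on the parameter-gradient norm."""
        # Forward pass: bound the norm of each layer's input.
        input_bounds, b = [], input_bound
        for layer in layers:
            input_bounds.append(b)
            b *= layer.lipschitz
        # Backward pass: bound the gradient reaching each layer.
        grad_bounds, g = [], loss_lipschitz
        for layer, x_bound in reversed(list(zip(layers, input_bounds))):
            grad_bounds.append(g * layer.param_grad_scale(x_bound))
            g *= layer.lipschitz
        return list(reversed(grad_bounds))

    # For 1-Lipschitz (GNP) layers, every bound collapses to
    # input_bound * loss_lipschitz.
    print(backprop_for_bounds([LayerSpec(1.0)] * 3, 1.0, 1.0))

With such bounds known a priori, the DP noise scale can be calibrated directly to the bound instead of to a clipping threshold, which is what removes clipping from DP-SGD.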
Open Source Code: Yes. "The code has been released as a Python package available at https://github.com/Algue-Rythme/lip-dp"
Open Datasets: Yes. "We validate the performance of our approach on tabular data from the ADBench suite (Han et al., 2022) using an MLP, and report the results in Table 3a. For MNIST (Fig. 3b) we use a Lipschitz LeNet-like architecture."
Dataset Splits: Yes. "We use a random stratified split into train (80%) / validation (20%)."
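The reported split is a standard one; a generic sketch with scikit-learn (placeholder data, not the paper's pipeline) would be:

    # Generic 80/20 stratified split sketch; X and y are placeholders.
    import numpy as np
    from sklearn.model_selection import train_test_split

    X = np.random.randn(1000, 20)            # placeholder features
    y = np.random.randint(0, 2, size=1000)   # placeholder labels

    X_train, X_val, y_train, y_val = train_test_split(
        X, y,
        test_size=0.2,   # 20% validation
        stratify=y,      # preserve class proportions in both splits
        random_state=0,  # fixed seed for reproducibility
    )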
Hardware Specification: Yes. "We rely on a machine with 32GB RAM and an NVIDIA Quadro RTX 8000 graphics card with 48GB memory. The GPU uses driver version 495.29.05, CUDA 11.5 (October 2021) and cuDNN 8.2 (June 7, 2021). We use a Python 3.8 environment. Experiments are run on NVIDIA GeForce RTX 3080 or 3090 GPUs."
Software Dependencies: Yes. "For JAX, we used jax 0.3.17 (Aug 31, 2022) with jaxlib 0.3.15 (July 23, 2022), flax 0.6.0 (Aug 17, 2022) and optax 0.1.4 (Nov 21, 2022). For TensorFlow, we used tensorflow 2.12 (March 22, 2023) with tensorflow_privacy 0.7.3 (September 1, 2021). For PyTorch, we used Opacus 1.4.0 (March 24, 2023) with PyTorch 2.0 (March 15, 2023). For lip-dp we used deel-lip 1.4.0 (January 10, 2023) on TensorFlow 2.8 (May 23, 2022)."
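Given how many version pins are involved, a small check script can confirm the environment before launching runs. This helper is our addition, not part of the released package, and the pins below follow the JAX/TensorFlow/Opacus setups quoted above.

    # Print installed vs. expected versions for the pinned dependencies.
    import importlib

    PINS = {
        "jax": "0.3.17",
        "jaxlib": "0.3.15",
        "flax": "0.6.0",
        "tensorflow": "2.12",
        "opacus": "1.4.0",
    }

    for name, expected in PINS.items():
        try:
            mod = importlib.import_module(name)
            print(f"{name}: installed {mod.__version__}, expected {expected}")
        except ImportError:
            print(f"{name}: not installed (expected {expected})")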
Experiment Setup: Yes. "For the comparisons, we leverage the DP-SGD implementation from Opacus. We perform a search over a broad range of hyper-parameter values: the configuration is given in Appendix D."
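For reference, a minimal Opacus DP-SGD baseline follows the pattern below; the model, data, and hyper-parameter values are placeholders, not the search grid from Appendix D.

    # Minimal DP-SGD sketch with Opacus; all values are placeholders.
    import torch
    from torch import nn
    from torch.utils.data import DataLoader, TensorDataset
    from opacus import PrivacyEngine

    model = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    data = TensorDataset(torch.randn(512, 1, 28, 28),
                         torch.randint(0, 10, (512,)))
    loader = DataLoader(data, batch_size=64)

    # Opacus wraps the model, optimizer, and loader to add per-sample
    # gradient clipping and Gaussian noise.
    model, optimizer, loader = PrivacyEngine().make_private(
        module=model,
        optimizer=optimizer,
        data_loader=loader,
        noise_multiplier=1.0,  # placeholder noise scale
        max_grad_norm=1.0,     # the clipping bound that lip-dp removes
    )

    criterion = nn.CrossEntropyLoss()
    for x, target in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), target)
        loss.backward()
        optimizer.step()

The max_grad_norm argument is exactly the per-sample clipping threshold that the paper's Lipschitz-network approach replaces with an analytically known gradient-norm bound.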