Unified Enhancement of Privacy Bounds for Mixture Mechanisms via $f$-Differential Privacy

Authors: Chendi Wang, Buxin Su, Jiayuan Ye, Reza Shokri, Weijie J. Su

NeurIPS 2023

Reproducibility Variable: Research Type
Result: Experimental
LLM Response: Our numerical computations of the trade-off function indicate that random initialization can enhance the privacy of DP-GD. Our analysis of $f$-DP guarantees for these mixture mechanisms relies on an inequality for trade-off functions introduced in this paper. This inequality implies the joint convexity of $F$-divergences. Finally, we study an $f$-DP analog of the advanced joint convexity of the hockey-stick divergence related to $(\varepsilon, \delta)$-DP and apply it to analyze the privacy of mixture mechanisms. To the best of our understanding, the leading privacy analysis for shuffled mechanisms is given in [23]. In this section, we compare the privacy bounds from our Theorem 4.2 and Corollary 4.3 with those found in Theorem 3.2 of [23]. Additionally, we assess the tightness of our bound against the empirical lower bounds obtained through binary search. Specifically, Figure 1 compares the trade-off function derived from our Theorem 4.2 with that of [23]; the comparison illustrates that $f$-DP offers tighter privacy bounds, as its trade-off function lies closer to the identity trade-off function. In Table 1, we compare the values of $\delta_{f\text{-DP}}(\varepsilon)$ derived from Corollary 4.3 with $\delta(\varepsilon)$ in [23]; the results indicate that $\delta_{f\text{-DP}}(\varepsilon)$ is significantly smaller than $\delta(\varepsilon)$. In Table 2, we present the $\varepsilon$ given by $f$-DP alongside the numerical upper bound of $\varepsilon$ from [23] and the numerical lower bound determined by binary search. (An illustrative sketch of this kind of trade-off-function computation appears after this checklist.)
Reproducibility Variable: Researcher Affiliation
Result: Academia
LLM Response: Chendi Wang, University of Pennsylvania, Philadelphia, PA, USA 19104, and Shenzhen Research Institute of Big Data, Shenzhen, Guangdong, China 518000 (chendi@wharton.upenn.edu); Buxin Su, Department of Mathematics, University of Pennsylvania, Philadelphia, PA, USA 19104 (subuxin@sas.upenn.edu); Jiayuan Ye, Department of Computer Science, National University of Singapore, Singapore (jiayuan@comp.nus.edu.sg); Reza Shokri, Department of Computer Science, National University of Singapore, Singapore (reza@comp.nus.edu.sg); Weijie J. Su, Wharton Statistics and Data Science Department, University of Pennsylvania, Philadelphia, PA, USA 19104 (suw@wharton.upenn.edu).
Reproducibility Variable: Pseudocode
Result: No
LLM Response: The paper does not contain any structured pseudocode or algorithm blocks.
Reproducibility Variable: Open Source Code
Result: No
LLM Response: The paper does not provide any explicit statements about open-sourcing code for the methodology described, nor does it include links to a code repository.
Reproducibility Variable: Open Datasets
Result: No
LLM Response: The paper discusses theoretical analysis and numerical computations for privacy bounds. It describes considering an example $D_0 = \{(x_i, y_i)\}_{i=1}^n$ with $y_i = a x_i$ and $x_i^2 = 1$ for some constant $a$, and $D_1$ obtained by removing an arbitrary element of $D_0$. This is a synthetic setup for numerical evaluation rather than an external, publicly available dataset with concrete access information. (A hypothetical sketch of this synthetic setup appears after this checklist.)
Reproducibility Variable: Dataset Splits
Result: No
LLM Response: The paper performs numerical evaluations of privacy bounds and trade-off functions rather than training machine learning models on datasets with explicit splits; therefore, it does not provide training/validation/test dataset splits.
Reproducibility Variable: Hardware Specification
Result: No
LLM Response: The paper does not mention any specific hardware used for its numerical computations or experiments.
Reproducibility Variable: Software Dependencies
Result: No
LLM Response: The paper does not mention any specific software dependencies with version numbers used for its numerical computations or experiments.
Reproducibility Variable: Experiment Setup
Result: No
LLM Response: The paper focuses on theoretical analysis and numerical evaluation of trade-off functions for privacy. While it describes the setup for its numerical evaluation (e.g., the example $D_0 = \{(x_i, y_i)\}_{i=1}^n$ with $y_i = a x_i$ and $x_i^2 = 1$ for some constant $a$, with $\sigma = 1$), this does not constitute explicit experimental setup details such as hyperparameters, optimizers, or training schedules for a machine learning model.
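
Sketch 1 (referenced in the Research Type entry): a minimal, hypothetical Python sketch of the kind of numerical trade-off-function computation described there. It is not the authors' code; the noise level mu = 1.2 and the epsilon values are illustrative assumptions. Only the formulas are standard Gaussian DP facts (Dong, Roth, and Su, 2022): the trade-off function $G_\mu(\alpha) = \Phi(\Phi^{-1}(1-\alpha) - \mu)$ and the conversion from $\mu$-Gaussian DP to an $(\varepsilon, \delta)$-DP curve, $\delta(\varepsilon) = \Phi(-\varepsilon/\mu + \mu/2) - e^{\varepsilon}\,\Phi(-\varepsilon/\mu - \mu/2)$.

    # Minimal sketch (not the authors' code). Assumes NumPy and SciPy are available.
    import numpy as np
    from scipy.stats import norm

    def gaussian_tradeoff(alpha, mu):
        """Gaussian trade-off function G_mu(alpha) = Phi(Phi^{-1}(1 - alpha) - mu)."""
        return norm.cdf(norm.ppf(1.0 - alpha) - mu)

    def delta_from_gdp(eps, mu):
        """(eps, delta)-DP curve implied by mu-Gaussian DP:
        delta(eps) = Phi(-eps/mu + mu/2) - exp(eps) * Phi(-eps/mu - mu/2)."""
        return norm.cdf(-eps / mu + mu / 2) - np.exp(eps) * norm.cdf(-eps / mu - mu / 2)

    mu = 1.2                              # assumed privacy parameter, illustration only
    alphas = np.linspace(0.0, 1.0, 11)    # grid of type I error levels
    print("G_mu(alpha):", np.round(gaussian_tradeoff(alphas, mu), 4))
    for eps in (0.5, 1.0, 2.0):
        print(f"delta({eps}) = {delta_from_gdp(eps, mu):.6f}")

A trade-off curve closer to the identity $\mathrm{Id}(\alpha) = 1 - \alpha$ corresponds to stronger privacy, which is the sense in which the Research Type entry calls the $f$-DP bound tighter.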
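
Sketch 2 (referenced in the Open Datasets entry): a hypothetical construction of the synthetic leave-one-out pair quoted there. The choices n = 100 and a = 0.5 are assumptions for illustration; the quoted setup only requires $y_i = a x_i$ and $x_i^2 = 1$ for some constant $a$, with $D_1$ obtained by removing an arbitrary element of $D_0$.

    # Hypothetical sketch of the synthetic neighboring datasets D0 and D1.
    import numpy as np

    rng = np.random.default_rng(0)
    n, a = 100, 0.5                       # illustrative choices, not from the paper

    x = rng.choice([-1.0, 1.0], size=n)   # enforces x_i^2 = 1
    y = a * x                             # enforces y_i = a * x_i
    D0 = list(zip(x, y))
    D1 = D0[:-1]                          # D1: D0 with one arbitrary element removed

    print(len(D0), len(D1))               # 100 99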