Heterogeneous Gaussian Mechanism: Preserving Differential Privacy in Deep Learning with Provable Robustness

Authors: NhatHai Phan, Minh N. Vu, Yang Liu, Ruoming Jin, Dejing Dou, Xintao Wu, My T. Thai

IJCAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Theoretical analysis and thorough evaluations show that our mechanism notably improves the robustness of differentially private deep neural networks, compared with baseline approaches, under a variety of model attacks. Rigorous experiments conducted on MNIST and CIFAR-10 datasets [Lecun et al., 1998; Krizhevsky and Hinton, 2009] show that our approach significantly improves the robustness of DP deep neural networks, compared with baseline approaches."
Researcher Affiliation | Academia | (1) New Jersey Institute of Technology, Newark, New Jersey, USA; (2) Kent State University, Kent, Ohio, USA; (3) University of Oregon, Eugene, Oregon, USA; (4) University of Arkansas, Fayetteville, Arkansas, USA; (5) University of Florida, Gainesville, Florida, USA
Pseudocode | Yes | "Algorithm 1 (Appendix A1) outlines the key steps in our Secure-SGD algorithm."
Open Source Code | Yes | "The implementation of our mechanism is available in TensorFlow." (Footnote 2: https://github.com/haiphanNJIT/SecureSGD)
Open Datasets | Yes | "Rigorous experiments conducted on MNIST and CIFAR-10 datasets [Lecun et al., 1998; Krizhevsky and Hinton, 2009]"
Dataset Splits | No | The paper uses the MNIST and CIFAR-10 datasets but does not explicitly state the train/validation/test splits (e.g., percentages or counts); it only reports the model architecture and some hyperparameters.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, processors, or memory) used to run the experiments.
Software Dependencies | No | The paper mentions TensorFlow but does not provide version numbers for TensorFlow or any other software dependencies.
Experiment Setup | Yes | "MNIST: We used two convolution layers (32 and 64 features). Each hidden neuron connects with a 5x5 unit patch. A fully-connected layer has 256 units. The batch size m was set to 128, ξ = 1.5, ψ = 2, Tµ = 10, and β = 1. CIFAR-10: We used three convolution layers (128, 128, and 256 features). Each hidden neuron connects with a 3x3 unit patch in the first layer, and a 5x5 unit patch in other layers. One fully-connected layer has 256 neurons. The batch size m was set to 128, ξ = 1.5, ψ = 10, Tµ = 3, and β = 1."
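
For context, the MNIST setup quoted above can be approximated in a few lines of TensorFlow/Keras. The sketch below is an illustration under stated assumptions, not the authors' released code: the pooling layers, activation functions, and output layer are guesses not given in the excerpt, and the privacy-specific hyperparameters (ξ, ψ, Tµ, β) and the Secure-SGD noise injection are not implemented.

```python
# Hypothetical sketch of the MNIST architecture from the Experiment Setup row.
# Assumptions (not stated in the paper excerpt): ReLU activations, 2x2 max pooling,
# and a 10-way softmax output. The DP/Secure-SGD noise mechanism is omitted.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_mnist_model() -> tf.keras.Model:
    return models.Sequential([
        layers.Input(shape=(28, 28, 1)),
        # Two convolution layers with 32 and 64 feature maps, 5x5 patches (as quoted).
        layers.Conv2D(32, kernel_size=5, padding="same", activation="relu"),
        layers.MaxPooling2D(2),                      # pooling is an assumption
        layers.Conv2D(64, kernel_size=5, padding="same", activation="relu"),
        layers.MaxPooling2D(2),                      # pooling is an assumption
        layers.Flatten(),
        # One fully-connected layer with 256 units (as quoted).
        layers.Dense(256, activation="relu"),
        layers.Dense(10, activation="softmax"),      # output layer is an assumption
    ])

# Batch size m = 128 as reported; ξ, ψ, Tµ, and β configure the paper's
# privacy/robustness mechanism and have no standard Keras counterpart here.
BATCH_SIZE = 128

if __name__ == "__main__":
    build_mnist_model().summary()
```

The quoted CIFAR-10 setup would follow the same pattern with three convolution layers (128, 128, and 256 feature maps), a 3x3 patch in the first layer and 5x5 patches afterwards, and one 256-neuron fully-connected layer.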