Secure Deep Graph Generation with Link Differential Privacy

Authors: Carl Yang, Haonan Wang, Ke Zhang, Liang Chen, Lichao Sun

IJCAI 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on two real-world network datasets show that our proposed DPGGAN model is able to generate graphs with effectively preserved global structure and rigorously protected individual link privacy.
Researcher Affiliation | Academia | Carl Yang (1), Haonan Wang (2), Ke Zhang (3), Liang Chen (4), Lichao Sun (5); (1) Emory University, (2) University of Illinois at Urbana-Champaign, (3) University of Hong Kong, (4) Sun Yat-sen University, (5) Lehigh University
Pseudocode | Yes | Algorithm 1: DPGGAN
Open Source Code | Yes | All code and data are in the Supplementary Materials accompanying the submission.
Open Datasets | Yes | To provide a side-to-side comparison between the original networks and generated networks, we use two standard datasets of real-world networks, i.e., DBLP and IMDB.
Dataset Splits | No | The paper mentions training models but does not provide specific training/validation/test dataset splits (e.g., percentages or sample counts) for its main experiments. It describes a sampling strategy for the batch size and a specific setup for link prediction evaluation, but not overall dataset splits needed for reproduction.
Hardware Specification | Yes | All experiments are done with four GeForce GTX 1080 GPUs and a 12-core 2.2GHz CPU.
Software Dependencies | No | The paper does not provide specific version numbers for ancillary software dependencies (e.g., Python, PyTorch, or TensorFlow versions, or specific libraries with their versions).
Experiment Setup | Yes | For GVAE and our models, we use two-layer GCNs with sizes 32-16 for both gµ and gσ of the encoder network, where the first layer is shared. We use two-layer FNNs with sizes 16-32 for f of the decoder (generator) network. For DPGGAN, we use another two-layer GCN with the same sizes for g and a three-layer FNN with sizes 16-32-1 for f. For DP-related hyper-parameters, we follow existing works [Abadi et al., 2016; Shokri and Shmatikov, 2015] to fix δ to 10^-5, σ to 5, and q to 0.01 (which determines the batch size B as B = qN, with N as the graph size). ... we empirically set the clipping hyper-parameter C to 5, decay ratio γ to 0.99, learning rate η to 10^-3, and the loss-weighting hyper-parameters λ1 and λ2 both to 0.1.
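
The row above pins down the layer sizes and DP hyper-parameters but not the implementation, so the following is a minimal PyTorch sketch of how such a setup could be wired up, assuming standard GVAE-style GCN layers and an Abadi et al. (2016) style DP-SGD update. Class and function names (GCNLayer, Encoder, Decoder, Critic, dp_sgd_step), the inner-product link decoder, and the simplified critic are illustrative assumptions, not taken from the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GCNLayer(nn.Module):
    """One graph-convolution layer: H' = A_hat @ H @ W (normalization of A_hat assumed done upstream)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, a_hat, h):
        return a_hat @ self.lin(h)


class Encoder(nn.Module):
    """Two-layer GCN encoder with sizes 32 -> 16; the first layer is shared by g_mu and g_sigma."""
    def __init__(self, in_dim, hid=32, lat=16):
        super().__init__()
        self.shared = GCNLayer(in_dim, hid)
        self.g_mu = GCNLayer(hid, lat)
        self.g_sigma = GCNLayer(hid, lat)

    def forward(self, a_hat, x):
        h = F.relu(self.shared(a_hat, x))
        return self.g_mu(a_hat, h), self.g_sigma(a_hat, h)


class Decoder(nn.Module):
    """Two-layer FNN generator f with sizes 16 -> 32; the inner-product link decoder is an assumption."""
    def __init__(self, lat=16, hid=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(lat, hid), nn.ReLU(), nn.Linear(hid, hid))

    def forward(self, z):
        h = self.net(z)
        return torch.sigmoid(h @ h.t())  # reconstructed adjacency probabilities


class Critic(nn.Module):
    """Three-layer FNN with sizes 16 -> 32 -> 1 (the paper's discriminator also uses a two-layer GCN g, omitted here)."""
    def __init__(self, lat=16, hid=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(lat, hid), nn.ReLU(),
                                 nn.Linear(hid, hid), nn.ReLU(),
                                 nn.Linear(hid, 1))

    def forward(self, z):
        return self.net(z)


# DP hyper-parameters quoted in the row above.
DELTA, SIGMA, Q, CLIP_C, GAMMA, LR = 1e-5, 5.0, 0.01, 5.0, 0.99, 1e-3


def dp_sgd_step(params, per_sample_grads, clip_c=CLIP_C, noise_sigma=SIGMA, lr=LR):
    """Schematic DP-SGD update (Abadi et al., 2016 style), not the paper's exact Algorithm 1.

    per_sample_grads: a list over samples, each element a list of tensors aligned
    with `params`. Each per-sample gradient is clipped to norm clip_c, summed,
    perturbed with Gaussian noise of scale noise_sigma * clip_c, averaged, and applied.
    """
    summed = [torch.zeros_like(p) for p in params]
    for grads in per_sample_grads:
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = torch.clamp(clip_c / (norm + 1e-12), max=1.0)
        for acc, g in zip(summed, grads):
            acc.add_(g * scale)
    with torch.no_grad():
        for p, acc in zip(params, summed):
            noisy = (acc + noise_sigma * clip_c * torch.randn_like(acc)) / len(per_sample_grads)
            p.add_(-lr * noisy)
```

Under the quoted sampling ratio, the batch size follows B = qN; for a hypothetical graph with N = 2,000 nodes this gives B = 20 sampled nodes per step. What exactly the decay ratio γ = 0.99 schedules is not specified in the quoted text, so it is left as a constant here.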