Differentially Private Decoupled Graph Convolutions for Multigranular Topology Protection

Authors: Eli Chien, Wei-Ning Chen, Chao Pan, Pan Li, Ayfer Ozgur, Olgica Milenkovic

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To validate our approach, we conducted extensive experiments on seven node classification benchmarking and illustrative synthetic datasets. The results demonstrate that DPDGCs significantly outperform existing DP-GNNs in terms of privacy-utility trade-offs.
Researcher Affiliation | Academia | Eli Chien (UIUC & Ga Tech, ichien3@illinois.edu, ichien6@gatech.edu); Wei-Ning Chen (Stanford University, wnchen@stanford.edu); Chao Pan (UIUC, chaopan2@illinois.edu); Pan Li (Ga Tech, panli@gatech.edu); Ayfer Özgür (Stanford University, aozgur@stanford.edu); Olgica Milenkovic (UIUC, milenkov@illinois.edu)
Pseudocode | Yes | Appendix L: Pseudocode for GAP and DPDGC
Open Source Code | Yes | Our code is publicly available at https://github.com/thupchnsky/dp-gnn.
Open Datasets | Yes | We test 7 benchmark datasets available from either the PyTorch Geometric library [42] or prior works. These datasets include the social network Facebook [43], citation networks Cora and Pubmed [44, 45], Amazon co-purchase networks Photo and Computers [46], and Wikipedia networks Squirrel and Chameleon [47]. (A dataset-loading sketch follows the table.)
Dataset Splits | No | The paper mentions using benchmark datasets and discusses training and testing, but it does not explicitly provide details about specific training, validation, and test dataset splits (e.g., percentages, sample counts, or references to predefined splits).
Hardware Specification | Yes | All experiments are performed on a Linux machine with 48 cores, 376 GB of RAM, and an NVIDIA Tesla P100 GPU with 12 GB of GPU memory.
Software Dependencies | No | The paper mentions key software components such as PyTorch Geometric, autodp, and Opacus, but does not provide specific version numbers for these dependencies. (A version-recording sketch follows the table.)
Experiment Setup | Yes | For all methods, we set the hidden dimension to 64 and use SELU [54] as the nonlinear activation function. The learning rate is set to 10^-3, and we do not decay the weights. Training involves 100 epochs for both the pretraining and classifier modules. We use a dropout of 0.5 for the non-private and edge GDP experiments and no dropout for the node GDP and k-neighbor GDP experiments. (A configuration sketch follows the table.)
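
The seven open benchmarks cited above are all distributed through PyTorch Geometric. Below is a minimal loading sketch; the mapping of each benchmark to a specific PyTorch Geometric dataset class is our assumption, since the paper only cites the library and the original dataset papers.

# Sketch: loading the seven cited benchmarks from PyTorch Geometric.
# The choice of dataset classes below is an assumption, not taken from the paper.
from torch_geometric.datasets import (
    Planetoid, Amazon, WikipediaNetwork, FacebookPagePage
)

def load_benchmark(name: str, root: str = "./data"):
    """Return a PyTorch Geometric dataset for one of the seven benchmarks."""
    if name in ("Cora", "Pubmed"):
        return Planetoid(root, name=name)          # citation networks
    if name in ("Photo", "Computers"):
        return Amazon(root, name=name)             # Amazon co-purchase networks
    if name in ("squirrel", "chameleon"):
        return WikipediaNetwork(root, name=name)   # Wikipedia networks
    if name == "Facebook":
        return FacebookPagePage(root)              # Facebook page-page network
    raise ValueError(f"Unknown benchmark: {name}")

data = load_benchmark("Cora")[0]  # a Data object with x, edge_index, and y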
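Since the paper names PyTorch Geometric, autodp, and Opacus without version numbers, one way to document a working environment when reproducing the results is to record the installed versions at run time, as in the sketch below. The distribution names are assumptions and may differ slightly between releases.

# Sketch: record installed versions of the dependencies named in the paper.
import importlib.metadata as metadata

for package in ("torch", "torch_geometric", "opacus", "autodp"):
    try:
        print(package, metadata.version(package))
    except metadata.PackageNotFoundError:
        print(package, "not installed")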
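The reported experiment setup translates into a small set of hyperparameters. The sketch below collects them in one place; the two-layer classifier head and the Cora-sized dimensions are illustrative placeholders, not the authors' DPDGC architecture.

# Sketch of the reported hyperparameters; the model here is a placeholder MLP head,
# not the DPDGC architecture from the paper.
import torch
import torch.nn as nn

HIDDEN_DIM = 64        # hidden dimension for all methods
LEARNING_RATE = 1e-3   # learning rate 10^-3, no weight decay
EPOCHS = 100           # for both the pretraining and classifier modules

def dropout_rate(privacy_setting: str) -> float:
    # 0.5 for non-private and edge GDP runs; 0 for node GDP and k-neighbor GDP
    return 0.5 if privacy_setting in ("nonprivate", "edge_gdp") else 0.0

class ClassifierHead(nn.Module):
    def __init__(self, in_dim: int, num_classes: int, p_drop: float):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, HIDDEN_DIM),
            nn.SELU(),              # SELU nonlinearity, as reported
            nn.Dropout(p_drop),
            nn.Linear(HIDDEN_DIM, num_classes),
        )

    def forward(self, x):
        return self.net(x)

# Example: Cora-sized input (1433 features, 7 classes) under the edge GDP setting.
model = ClassifierHead(in_dim=1433, num_classes=7, p_drop=dropout_rate("edge_gdp"))
optimizer = torch.optim.Adam(model.parameters(),
                             lr=LEARNING_RATE, weight_decay=0.0)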