Vertically Federated Graph Neural Network for Privacy-Preserving Node Classification

Authors: Chaochao Chen, Jun Zhou, Longfei Zheng, Huiwen Wu, Lingjuan Lyu, Jia Wu, Bingzhe Wu, Ziqi Liu, Li Wang, Xiaolin Zheng

IJCAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We conduct experiments on three benchmarks and the results demonstrate the effectiveness of VFGNN."
Researcher Affiliation | Collaboration | 1 College of Computer Science and Technology, Zhejiang University, Hangzhou, China; 2 Ant Group, Hangzhou, China; 3 Sony AI, Tokyo, Japan; 4 Macquarie University, Sydney, NSW 2109, Australia; 5 Peking University, Beijing, China; 6 JZTData Technology, Hangzhou, China
Pseudocode | Yes | "Algorithm 1 Information publishing mechanisms of data holders to server using differential privacy" and "Algorithm 2 Privacy-preserving GraphSAGE for node label prediction (forward propagation)" (see the sketches below)
Open Source Code | No | The paper does not provide any explicit statement or link regarding the public availability of the source code for the described methodology.
Open Datasets | Yes | "We use four benchmark datasets, i.e., Cora, Pubmed, Citeseer [Sen et al., 2008], and arXiv [Hu et al., 2020]."
Dataset Splits | Yes | "We use exactly the same dataset partition of training, validate, and test following the prior work [Kipf and Welling, 2016; Hu et al., 2020]." and "We tune parameters based on the validate dataset and evaluate model performance on the test dataset." (see the data-loading sketch below)
Hardware Specification | No | The paper states "where we use local area network" when discussing running time, but it does not provide specific hardware details such as GPU/CPU models, processors, or memory.
Software Dependencies | No | The paper mentions TanH and Sigmoid activation functions and describes the network structure, but it does not specify any software libraries or frameworks with version numbers (e.g., TensorFlow, PyTorch, or scikit-learn).
Experiment Setup | Yes | "For all the models, we use TanH as the activation function of neighbor propagation, and Sigmoid as the activation function of hidden layers. For the deep neural network on the server, we set the dropout rate to 0.5 and the network structure as (d, d, |C|), where d ∈ {32, 64, 128} is the dimension of node embeddings and |C| the number of classes. We vary ε ∈ {1, 2, 4, 8, 16, 32, 64, ∞}, set δ = 1e-4 and the clip value C = 1 to study the effects of differential privacy on our model. We vary the propagation depth K ∈ {2, 3, 4, 5}, L2 regularization in {10^-2, 10^-4}, and learning rate in {10^-2, 10^-3}." (see the configuration sketch below)
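
To make the quoted algorithms concrete, the sketches below use Python. First, the local half of Algorithm 2: each data holder propagates node features over its own partial graph before anything leaves its premises. The mean-style aggregation via a row-normalized adjacency matrix is our assumption for illustration; the paper's quoted setup fixes only the TanH activation and the depth K ∈ {2, 3, 4, 5}.

```python
import numpy as np

def local_propagate(h, adj_norm, K=2):
    """K rounds of GraphSAGE-style neighbor propagation with TanH,
    run by one data holder on its local subgraph. `adj_norm` is a
    row-normalized adjacency matrix with self-loops (an assumed,
    common normalization; the paper does not pin this down)."""
    for _ in range(K):
        h = np.tanh(adj_norm @ h)
    return h
```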
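
Next, the publishing step of Algorithm 1. The quoted settings (clip value C = 1, δ = 1e-4, varying ε) are consistent with clip-and-noise publishing; the Gaussian mechanism below is one standard instantiation, and the function name is ours, not the authors'.

```python
import numpy as np

def publish_embeddings_dp(h, eps, delta=1e-4, clip=1.0, rng=None):
    """Clip each local embedding to L2 norm `clip`, then add Gaussian
    noise before sending it to the server. sigma follows the classical
    Gaussian-mechanism bound, which strictly speaking requires eps <= 1;
    it is used here purely to illustrate the publishing step."""
    rng = rng or np.random.default_rng()
    norms = np.linalg.norm(h, axis=1, keepdims=True)
    h_clipped = h / np.maximum(norms / clip, 1.0)   # bound per-row sensitivity
    if np.isinf(eps):                               # eps = inf: no-DP baseline
        return h_clipped
    sigma = clip * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return h_clipped + rng.normal(0.0, sigma, size=h_clipped.shape)

# Example: publish 64-dim embeddings for Cora's 2708 nodes at eps = 8.
noisy_h = publish_embeddings_dp(np.random.randn(2708, 64), eps=8.0)
```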
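
The Dataset Splits row refers to the fixed Planetoid partition. Assuming PyTorch Geometric and OGB as loaders (the paper names no framework), the same splits can be reproduced as follows; `split="public"` is exactly the Kipf and Welling (2016) partition.

```python
from torch_geometric.datasets import Planetoid

# 'public' reproduces the fixed Kipf & Welling (2016) train/val/test split.
for name in ["Cora", "Citeseer", "Pubmed"]:
    data = Planetoid(root="data", name=name, split="public")[0]
    print(name, int(data.train_mask.sum()), int(data.val_mask.sum()),
          int(data.test_mask.sum()))  # Cora: 140 / 500 / 1000

# For arXiv, OGB ships its own fixed split:
# from ogb.nodeproppred import PygNodePropPredDataset
# split = PygNodePropPredDataset(name="ogbn-arxiv").get_idx_split()
```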
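
Finally, the Experiment Setup quote pins down the server-side classifier closely enough to sketch. The module below is our reading of "(d, d, |C|) with dropout 0.5 and Sigmoid hidden activations", not the authors' code; the output layer is left as logits.

```python
import torch.nn as nn

def server_classifier(d: int, num_classes: int, dropout: float = 0.5) -> nn.Sequential:
    """(d, d, |C|) network per the quoted setup: one Sigmoid hidden
    layer with dropout 0.5, logits out (softmax applied in the loss)."""
    return nn.Sequential(
        nn.Linear(d, d),
        nn.Sigmoid(),
        nn.Dropout(dropout),
        nn.Linear(d, num_classes),
    )

# Hyperparameter grid reported in the paper (eps = inf means no DP noise):
grid = {
    "d":   [32, 64, 128],
    "eps": [1, 2, 4, 8, 16, 32, 64, float("inf")],
    "K":   [2, 3, 4, 5],
    "l2":  [1e-2, 1e-4],
    "lr":  [1e-2, 1e-3],
}
model = server_classifier(d=64, num_classes=7)  # Cora has 7 classes
```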