On provable privacy vulnerabilities of graph representations
Authors: Ruofan Wu, Guanhua Fang, Mingyang Zhang, Qiying Pan, Tengfei Liu, Weiqiang Wang
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Moreover, we present empirical corroboration indicating that such attacks can (almost) perfectly reconstruct sparse graphs as graph size increases." and "In this section, comprehensive empirical studies are conducted to evaluate the effectiveness of SERA against both non-private and private node representations." |
| Researcher Affiliation | Collaboration | Ant Group, Fudan University, Shanghai Jiao Tong University |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | "Code available at https://github.com/Rorschach1989/gnn_privacy_attack" |
| Open Datasets | Yes | "The analysis comprises the well-known Planetoid datasets [41], which are distinguished by their high homophily; the heterophilic datasets Squirrel, Chameleon, and Actor [29]..." and "two larger-scale datasets, namely Amazon-Products [42] and Reddit [14]." and "All datasets used throughout experiments are publicly available." |
| Dataset Splits | Yes | "We consider a transductive node classification setting and use the standard train-test splits." |
| Hardware Specification | Yes | "All experiments are done on a single NVIDIA A100 GPU (with 80GB memory)." |
| Software Dependencies | No | The paper mentions software such as "PyTorch [28] and PyTorch Geometric [13]" but does not provide specific version numbers for these components. |
| Experiment Setup | Yes | "Across all the experiments, we fix the GNN model to be of depth 2 and use full-batch training for 1000 steps (epochs) using the Adam optimizer with a learning rate of 0.001." and "We vary the feature dimension d ∈ {2^j, 2 ≤ j ≤ 11} and network depth 1 ≤ L ≤ 10 in order to obtain a fine-grained assessment of SERA." |
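The quoted setup (a depth-2 GNN trained full-batch for 1000 steps with Adam at learning rate 0.001) can be sketched in miniature. The snippet below is not the authors' code: it uses a hypothetical 4-node toy graph, an assumed feature dimension of 8, and a plain numpy GCN in place of the paper's PyTorch Geometric models, purely to make the hyperparameters concrete.

```python
import numpy as np

# Hedged sketch (not the paper's implementation): a depth-2 GCN trained
# full-batch with Adam (lr = 0.001) for 1000 steps, mirroring the setup
# quoted in the table. Graph, features, and labels are toy assumptions.
rng = np.random.default_rng(0)

# Toy undirected path graph on 4 nodes: edges 0-1, 1-2, 2-3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_hat = A + np.eye(4)                                   # add self-loops
d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(1))
S = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]   # sym-normalized adjacency

X = rng.standard_normal((4, 8))     # node features (toy d = 8; the paper varies d)
y = np.array([0, 0, 1, 1])          # toy binary node labels

W1 = rng.standard_normal((8, 16)) * 0.1   # layer-1 weights
W2 = rng.standard_normal((16, 2)) * 0.1   # layer-2 weights

def forward(W1, W2):
    H = np.maximum(S @ X @ W1, 0.0)       # GCN layer 1 + ReLU
    return H, S @ H @ W2                  # GCN layer 2 -> logits

def loss_and_grads(W1, W2):
    H, logits = forward(W1, W2)
    e = np.exp(logits - logits.max(1, keepdims=True))
    p = e / e.sum(1, keepdims=True)                     # softmax
    loss = -np.log(p[np.arange(4), y]).mean()
    dlogits = p.copy()
    dlogits[np.arange(4), y] -= 1.0
    dlogits /= 4
    gW2 = H.T @ (S.T @ dlogits)
    dH = (S.T @ dlogits) @ W2.T
    dH[H <= 0] = 0.0                                    # ReLU backward
    gW1 = (S @ X).T @ dH
    return loss, gW1, gW2

# Minimal Adam with the quoted learning rate of 0.001.
lr, b1, b2, eps = 1e-3, 0.9, 0.999, 1e-8
params = [W1, W2]
m = [np.zeros_like(p) for p in params]
v = [np.zeros_like(p) for p in params]
losses = []
for t in range(1, 1001):                                # 1000 full-batch steps
    loss, gW1, gW2 = loss_and_grads(params[0], params[1])
    losses.append(loss)
    for p, g, mi, vi in zip(params, [gW1, gW2], m, v):
        mi[:] = b1 * mi + (1 - b1) * g
        vi[:] = b2 * vi + (1 - b2) * g * g
        p -= lr * (mi / (1 - b1 ** t)) / (np.sqrt(vi / (1 - b2 ** t)) + eps)

_, logits = forward(params[0], params[1])
acc = (logits.argmax(1) == y).mean()
```

On a graph this small, 1000 full-batch Adam steps comfortably fit the labels; the point of the sketch is only to show how the stated hyperparameters fit together, not to reproduce any experimental result.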