Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Releasing Graph Neural Networks with Differential Privacy Guarantees
Authors: Iyiola Emmanuel Olatunji, Thorben Funke, Megha Khosla
TMLR 2023 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We theoretically analyze our approach in the Rényi differential privacy framework. Besides, we show the solid experimental performance of our method compared to several baselines adapted for graph-structured data. |
| Researcher Affiliation | Academia | Iyiola E. Olatunji EMAIL L3S Research Center Germany Thorben Funke EMAIL L3S Research Center, Germany Megha Khosla EMAIL TU Delft, Netherlands |
| Pseudocode | Yes | The pseudocode for our PrivGNN method is provided in Algorithm 1. Algorithm 1: PrivGNN |
| Open Source Code | Yes | Our code is available at https://github.com/iyempissy/privGnn. |
| Open Datasets | Yes | We perform our experiments on four representative datasets, namely Amazon Shchur et al. (2018), Amazon2M Chiang et al. (2019), Reddit Hamilton et al. (2017) and Facebook Traud et al. (2012). We also perform additional experiments on ArXiv Hu et al. (2020) and Credit defaulter graph Agarwal et al. (2021). |
| Dataset Splits | Yes | We select 50% of Amazon, Amazon2M, Reddit and Credit, 40% of ArXiv, and 20% of the Facebook dataset as the test set from the public graph. Specifically, a GNN model is trained using the training set (50% of the public data for Amazon, Amazon2M, Reddit, and Credit, 60% of ArXiv, and 80% of the Facebook dataset) and their corresponding ground truth labels in a transductive setting. |
| Hardware Specification | Yes | All our experiments were conducted for 10 different instantiations using PyTorch Geometric library Fey & Lenssen (2019) and Python3 on 11GB GeForce GTX 1080 Ti GPU. |
| Software Dependencies | No | All our experiments were conducted for 10 different instantiations using PyTorch Geometric library Fey & Lenssen (2019) and Python3 on 11GB GeForce GTX 1080 Ti GPU. The paper mentions 'PyTorch Geometric library' and 'Python3' but does not specify version numbers for these key software components. |
| Experiment Setup | Yes | For the private and public GNN models, we used a two-layer GraphSAGE model Hamilton et al. (2017) with hidden dimension of 64 and ReLU activation function. We applied batch normalization to the output of the first layer. We applied a dropout of 0.5 and a learning rate of 0.01. For the multilayer perceptron model (MLP) used in PateM, we used three fully connected layers and ReLU activation layers. We trained all models for 200 epochs using the negative log-likelihood loss function for node classification with the Adam optimizer. |
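The Experiment Setup row above can be sketched in plain PyTorch. This is an illustrative reimplementation, not the authors' code: the dense-adjacency mean-aggregation `SAGELayer` and the `train` helper are simplified stand-ins (the paper uses the PyTorch Geometric library), but the hyperparameters match the quoted setup (two layers, hidden dim 64, batch norm after the first layer, ReLU, dropout 0.5, Adam with lr 0.01, 200 epochs, NLL loss).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SAGELayer(nn.Module):
    """Simplified GraphSAGE-style layer: concatenate each node's features
    with the mean of its neighbours' features, then apply a linear map."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(2 * in_dim, out_dim)

    def forward(self, x, adj):
        # adj: dense (N, N) adjacency matrix; clamp degree to avoid div-by-zero.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        neigh = adj @ x / deg
        return self.lin(torch.cat([x, neigh], dim=1))


class TwoLayerSAGE(nn.Module):
    """Two-layer GraphSAGE with hidden dim 64, batch norm on the first
    layer's output, ReLU, and dropout 0.5, per the setup quoted above."""

    def __init__(self, in_dim, num_classes, hidden=64, dropout=0.5):
        super().__init__()
        self.conv1 = SAGELayer(in_dim, hidden)
        self.bn = nn.BatchNorm1d(hidden)
        self.conv2 = SAGELayer(hidden, num_classes)
        self.dropout = dropout

    def forward(self, x, adj):
        h = F.relu(self.bn(self.conv1(x, adj)))
        h = F.dropout(h, p=self.dropout, training=self.training)
        # log-probabilities pair with the negative log-likelihood loss
        return F.log_softmax(self.conv2(h, adj), dim=1)


def train(model, x, adj, y, train_mask, epochs=200, lr=0.01):
    """Transductive node-classification loop: Adam, lr 0.01, NLL loss."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        model.train()
        opt.zero_grad()
        out = model(x, adj)
        loss = F.nll_loss(out[train_mask], y[train_mask])
        loss.backward()
        opt.step()
    return model
```

Note the design choice implied by the quoted setup: the model emits `log_softmax` outputs so that `nll_loss` can be used directly, which is equivalent to cross-entropy on raw logits.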