Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Going Deeper into Locally Differentially Private Graph Neural Networks

Authors: Longzhu He, Chaozhuo Li, Peng Tang, Sen Su

ICML 2025 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on real-world datasets indicate that UPGNET significantly outperforms existing methods in terms of both privacy protection and learning utility. In this section, we conduct a series of experiments to validate the performance of UPGNET and its core components. More experimental results can be found in App. F.
Researcher Affiliation | Academia | ¹Beijing University of Posts and Telecommunications, Beijing, China; ²Shandong University, Qingdao, China. Correspondence to: Sen Su <EMAIL>.
Pseudocode | Yes | Algorithm 1: High-Order Aggregator (HOA) Layer
Open Source Code | No | The paper does not explicitly state that the source code for the methodology is released, nor does it provide a link to a repository. Mentions of other tools are not relevant to the authors' own code release.
Open Datasets | Yes | We conduct experiments on four representative graph datasets: Cora (Yang et al., 2016), Citeseer (Yang et al., 2016), LastFM (Rozemberczki & Sarkar, 2020), and Facebook (Rozemberczki et al., 2021). These datasets are commonly used in graph machine learning (Wu et al., 2020; Zhang et al., 2020).
Dataset Splits | Yes | All datasets are randomly divided into 50/25/25% for training, validation, and test sets, respectively.
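The 50/25/25% random node split quoted above can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function name `split_indices` and the fixed-seed shuffle are assumptions for reproducibility of the example.

```python
import random

def split_indices(n, seed=0):
    """Randomly split n node indices into 50% train / 25% val / 25% test.

    Illustrative sketch only: the paper states the ratios but not the
    splitting implementation or seeding strategy.
    """
    idx = list(range(n))
    random.Random(seed).shuffle(idx)  # deterministic shuffle for the example
    n_train = n // 2
    n_val = n // 4
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]
```

For a graph with 100 nodes this yields disjoint index sets of sizes 50, 25, and 25 that together cover every node.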
Hardware Specification | No | The paper mentions general GPU-based operations in the context of reducing computational overhead for scalability, but does not specify any particular GPU models, CPU models, or other hardware used for running the experiments.
Software Dependencies | No | The paper mentions using graph convolutional networks (GCN), GraphSAGE, and graph attention networks (GAT) as backbone models, and the Adam optimizer. However, it does not provide specific version numbers for any of these software components or underlying libraries (e.g., PyTorch, TensorFlow, Python version).
Experiment Setup | Yes | All GNN models have two graph convolutional layers, each with a hidden dimension of size 16 and a SELU activation function (Klambauer et al., 2017) followed by dropout. The GAT model has four attention heads. To obtain the best hyperparameters, we use grid search for selection: both learning rate and weight decay are chosen from {10⁻⁴, 10⁻³, 10⁻², 10⁻¹}, and dropout is chosen from {10⁻⁴, 10⁻³, 10⁻², 10⁻¹}. The HOA's step parameter is denoted by K. Based on the selected best hyperparameters, the best K is chosen from {0, 2, 4, 8, 16, 32, 64} for every ϵ ∈ {0.01, 0.1, 1, 2, 3}. The parameters τ₁, τ₂ of the NFR layer belong to {0.1, 0.3, 0.5, 0.7, 0.9}. We use the Adam optimizer (Kingma & Ba, 2014) for all models. All models undergo 500 training iterations, and the best model is chosen for testing based on validation loss.
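The grid search described above can be sketched as a plain exhaustive loop over the stated hyperparameter grids. This is an illustrative sketch, not the authors' implementation: the `validate` callback is a hypothetical stand-in for training a two-layer GNN for 500 iterations and returning its validation loss.

```python
import itertools

# Hyperparameter grids quoted from the paper's setup.
LEARNING_RATES = [1e-4, 1e-3, 1e-2, 1e-1]
WEIGHT_DECAYS = [1e-4, 1e-3, 1e-2, 1e-1]
DROPOUTS = [1e-4, 1e-3, 1e-2, 1e-1]

def grid_search(validate):
    """Return the (lr, weight_decay, dropout) config with lowest validation loss.

    `validate` is a hypothetical callback: it maps a configuration to a
    validation loss, standing in for the 500-iteration training run the
    paper describes (not implemented here).
    """
    best_cfg, best_loss = None, float("inf")
    for cfg in itertools.product(LEARNING_RATES, WEIGHT_DECAYS, DROPOUTS):
        loss = validate(*cfg)
        if loss < best_loss:
            best_cfg, best_loss = cfg, loss
    return best_cfg, best_loss
```

Per the setup, K and the NFR thresholds τ₁, τ₂ would then be tuned in a second pass on top of the configuration this search selects.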