Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Virtual Nodes Can Help: Tackling Distribution Shifts in Federated Graph Learning
Authors: Xingbo Fu, Zihan Chen, Yinhan He, Song Wang, Binchi Zhang, Chen Chen, Jundong Li
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Comprehensive experiments on four datasets under five settings demonstrate the superiority of our proposed FedVN over nine baselines. ... Table 1: Performance of FedVN and other baselines over four datasets under five settings. ... Figure 2: Convergence curves of FedVN and other baselines on CMNIST/Color and SST2/Length. |
| Researcher Affiliation | Academia | University of Virginia |
| Pseudocode | No | The overall algorithm of FedVN can be found in our technical appendix. |
| Open Source Code | Yes | Code: https://github.com/xbfu/FedVN |
| Open Datasets | Yes | Datasets. We adopt graph datasets in (Gui et al. 2022) to simulate distributed graph data in multiple clients. Specifically, we use four datasets, including Motif, CMNIST, ZINC, and SST2. |
| Dataset Splits | No | We split each graph dataset into multiple clients according to its environment settings so that every client has local graphs from one environment. More details about these datasets can be found in our technical appendix. |
| Hardware Specification | No | No specific hardware details (like GPU/CPU models, memory, or processor types) are mentioned in the paper. |
| Software Dependencies | No | The paper mentions 'SGD as the optimizer' but does not provide specific version numbers for any software dependencies or libraries. |
| Experiment Setup | Yes | Each repetition runs 100 epochs. The local epoch is set to 1, and the batch size is 32. The hidden size of the GNN model and the edge predictor is set to 100. We use SGD as the optimizer for local updates with a learning rate set to 0.01 for CMNIST and 0.001 for others. The temperature τ is set to 0.1. |
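The hyperparameters quoted in the Experiment Setup row can be collected into a small configuration sketch. This is an illustrative assumption, not the authors' code: the variable names and the helper function below are hypothetical, and only the numeric values come from the paper.

```python
# Hyperparameters reported in the paper's experiment setup (values quoted
# from the table above; everything else here is an illustrative assumption).
FEDVN_SETUP = {
    "epochs": 100,        # each repetition runs 100 epochs
    "local_epochs": 1,    # local epochs per communication round
    "batch_size": 32,
    "hidden_size": 100,   # hidden size of the GNN model and edge predictor
    "temperature": 0.1,   # temperature tau
    "optimizer": "SGD",   # used for local updates
}

def local_lr(dataset: str) -> float:
    """Learning rate for local SGD updates: 0.01 for CMNIST, 0.001 otherwise."""
    return 0.01 if dataset == "CMNIST" else 0.001
```

For example, `local_lr("CMNIST")` returns `0.01`, while `local_lr("ZINC")`, `local_lr("Motif")`, and `local_lr("SST2")` all return `0.001`, matching the dataset-dependent learning rates reported in the setup.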