FedSSP: Federated Graph Learning with Spectral Knowledge and Personalized Preference
Authors: Zihan Tan, Guancheng Wan, Wenke Huang, Mang Ye
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform extensive experiments on cross-dataset and cross-domain settings to demonstrate the superiority of our framework. |
| Researcher Affiliation | Academia | 1 National Engineering Research Center for Multimedia Software, Institute of Artificial Intelligence, Hubei Key Laboratory of Multimedia and Network Communication Engineering, School of Computer Science, Wuhan University, Wuhan, China. 2 Taikang Center for Life and Medical Sciences, Wuhan University, Wuhan, China |
| Pseudocode | No | The paper provides architectural diagrams (Figure 2) and mathematical formulations, but no explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code is available at https://github.com/OakleyTan/FedSSP. |
| Open Datasets | Yes | Following the settings in [61], we use 15 public graph classification datasets from four different domains, including Small Molecules (MUTAG, BZR, COX2, DHFR, PTC_MR, AIDS, NCI1), Bioinformatics (PROTEIN, OHSU, Peking_1), Social Networks (IMDB-BINARY, IMDB-MULTI), and Computer Vision (Letter-low, Letter-high, Letter-med) [52]. |
| Dataset Splits | Yes | For each setting, every client holds its unique graph dataset, among which 10% are held out for testing, 10% for validation, and 80% for training. |
| Hardware Specification | Yes | The experiments are conducted using NVIDIA GeForce RTX 3090 GPUs as the hardware platform, coupled with an Intel(R) Xeon(R) Gold 6240 CPU @ 2.60GHz. |
| Software Dependencies | No | The paper mentions leveraging the AdamW optimizer but does not specify version numbers for software libraries (e.g., PyTorch, TensorFlow) or other key software components used for the experiments. |
| Experiment Setup | Yes | We leverage the AdamW optimizer [31] for local GNNs with learning rate 0.001, the default parameter of ϵ = 1e-8, and (β1, β2) = (0.99, 0.999), as suggested by [54, 85]. The number of communication rounds is 200 for all FL methods. For results in Tab. 1, we set up 4 heads for the multi-head attention mechanism and 128 for the hidden dimension. (A hedged configuration sketch follows the table.) |
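The following is a minimal sketch of the reported training configuration and per-client data split, assuming PyTorch (the paper does not name its framework or library versions). The names `model`, `split_client_graphs`, and `seed` are hypothetical placeholders introduced here for illustration; only the numeric settings (AdamW with lr 0.001, ϵ = 1e-8, β = (0.99, 0.999), 200 communication rounds, 4 attention heads, 128 hidden dimension, 80/10/10 split) come from the quoted text.

```python
import torch

# Hypothetical placeholder for a client's local GNN; the actual model
# architecture is not reproduced here.
model = torch.nn.Linear(128, 2)

# Reported optimizer settings: AdamW with lr 0.001, eps 1e-8, betas (0.99, 0.999).
optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=1e-3,
    eps=1e-8,
    betas=(0.99, 0.999),
)

NUM_ROUNDS = 200   # communication rounds for all FL methods
NUM_HEADS = 4      # multi-head attention heads (Tab. 1 setting)
HIDDEN_DIM = 128   # hidden dimension (Tab. 1 setting)

def split_client_graphs(graphs, seed=0):
    """Split one client's graph list into 80% train / 10% val / 10% test,
    as reported in the paper. The random seed is an assumption."""
    g = torch.Generator().manual_seed(seed)
    perm = torch.randperm(len(graphs), generator=g).tolist()
    n_test = int(0.1 * len(graphs))
    n_val = int(0.1 * len(graphs))
    test = [graphs[i] for i in perm[:n_test]]
    val = [graphs[i] for i in perm[n_test:n_test + n_val]]
    train = [graphs[i] for i in perm[n_test + n_val:]]
    return train, val, test
```

This sketch only mirrors the stated hyperparameters; it does not implement the FedSSP spectral-knowledge or personalized-preference components themselves.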