Federated Learning on Non-IID Graphs via Structural Knowledge Sharing
Authors: Yue Tan, Yixin Liu, Guodong Long, Jing Jiang, Qinghua Lu, Chengqi Zhang
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform extensive experiments over both cross-dataset and cross-domain non-IID FGL settings, demonstrating the superiority of FedStar. |
| Researcher Affiliation | Academia | 1 Australian Artificial Intelligence Institute, University of Technology Sydney, Australia 2 Monash University, Australia 3 Data61, CSIRO, Australia |
| Pseudocode | No | The paper describes its methods through text and diagrams (Figure 2), but does not include a formally labeled 'Pseudocode' or 'Algorithm' block. |
| Open Source Code | Yes | The code of FedStar is available at https://github.com/yuetan031/FedStar. |
| Open Datasets | Yes | Following the settings in (Xie et al. 2021), we use 16 public graph classification datasets from four different domains, including Small Molecules (MUTAG, BZR, COX2, DHFR, PTC MR, AIDS, NCI1), Bioinformatics (ENZYMES, DD, PROTEINS), Social Networks (COLLAB, IMDB-BINARY, IMDB-MULTI), and Computer Vision (Letter-low, Letter-high, Letter-med) (Morris et al. 2020). |
| Dataset Splits | Yes | In each of the settings, a client owns one of the corresponding datasets and randomly splits it into three parts: 80% for training, 10% for validation, and 10% for testing. |
| Hardware Specification | Yes | We implement all the methods using PyTorch and conduct all experiments on one NVIDIA Tesla V100 GPU. |
| Software Dependencies | No | The paper mentions 'PyTorch' but does not specify a version number for this or any other software dependency. |
| Experiment Setup | Yes | We use a three-layer GCN (Kipf and Welling 2017) as the structure encoder and a three-layer GIN (Xu et al. 2019) as the feature encoder, both with a hidden size of 64. The dimensions of DSE and RWSE, denoted as k1 and k2, are both set to 16. The local epoch number and batch size are 1 and 128, respectively. We use an Adam (Kingma and Ba 2014) optimizer with weight decay 5e-4 and learning rate 0.001. The number of communication rounds is 200 for all FL methods. |
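
The 80/10/10 per-client split described in the Dataset Splits row is straightforward to reproduce. Below is a minimal sketch, assuming PyTorch Geometric's `TUDataset` loader for one client holding MUTAG; the dataset name and root path are illustrative and not taken from the FedStar repository.

```python
# Minimal sketch of the per-client 80/10/10 random split described above.
# Assumes PyTorch Geometric's TUDataset loader; name/root are illustrative.
import torch
from torch_geometric.datasets import TUDataset

dataset = TUDataset(root='data/TUDataset', name='MUTAG')  # one client's dataset
perm = torch.randperm(len(dataset))                       # random shuffle

n_train = int(0.8 * len(dataset))
n_val = int(0.1 * len(dataset))

train_set = dataset[perm[:n_train]]               # 80% for training
val_set = dataset[perm[n_train:n_train + n_val]]  # 10% for validation
test_set = dataset[perm[n_train + n_val:]]        # 10% for testing
```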
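The hyperparameters quoted in the Experiment Setup row can be mirrored directly in code. The sketch below is a generic stand-in (a plain three-layer GIN encoder and an Adam optimizer with the quoted settings), not FedStar's released model; `num_features` stands for the node-feature dimension of whichever client dataset is loaded.

```python
# Illustrative three-layer GIN feature encoder with hidden size 64, trained
# with Adam (lr=0.001, weight_decay=5e-4), matching the hyperparameters
# quoted above. Generic sketch, not FedStar's actual code.
import torch
from torch.nn import Linear, ReLU, Sequential
from torch_geometric.nn import GINConv, global_add_pool

class GINEncoder(torch.nn.Module):
    def __init__(self, num_features, hidden=64, num_layers=3):
        super().__init__()
        self.convs = torch.nn.ModuleList()
        for i in range(num_layers):
            mlp = Sequential(Linear(num_features if i == 0 else hidden, hidden),
                             ReLU(), Linear(hidden, hidden))
            self.convs.append(GINConv(mlp))

    def forward(self, x, edge_index, batch):
        for conv in self.convs:
            x = conv(x, edge_index).relu()
        return global_add_pool(x, batch)  # graph-level readout

model = GINEncoder(num_features=7)  # e.g. MUTAG has 7-dimensional node labels
optimizer = torch.optim.Adam(model.parameters(), lr=0.001, weight_decay=5e-4)
# Local training would then run 1 epoch per round with batch size 128,
# for 200 communication rounds, as quoted above.
```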
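The quoted setup also fixes the RWSE dimension k2 at 16. As an assumption about what such an encoding looks like, the sketch below computes a common form of k-step random-walk structural encoding (the return probabilities on the diagonal of powers of the random-walk matrix); FedStar's exact DSE/RWSE construction may differ.

```python
# Sketch of a 16-step random-walk structural encoding for one graph:
# entry k of node i is the return probability (P^k)_{ii} with P = D^{-1} A.
# This is a common construction and an assumption here, not FedStar's code.
import torch

def rwse(edge_index, num_nodes, k_steps=16):
    # Dense adjacency and random-walk matrix P = D^{-1} A.
    adj = torch.zeros(num_nodes, num_nodes)
    adj[edge_index[0], edge_index[1]] = 1.0
    deg = adj.sum(dim=1).clamp(min=1.0)
    P = adj / deg.unsqueeze(1)

    out = []
    Pk = torch.eye(num_nodes)
    for _ in range(k_steps):
        Pk = Pk @ P
        out.append(Pk.diagonal())   # return probability after k steps
    return torch.stack(out, dim=1)  # shape [num_nodes, k_steps]
```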