Personalized Subgraph Federated Learning
Authors: Jinheon Baek, Wonyong Jeong, Jiongdao Jin, Jaehong Yoon, Sung Ju Hwang
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We validate our FED-PUB for its subgraph FL performance on six datasets, considering both non-overlapping and overlapping subgraphs, on which it significantly outperforms relevant baselines. |
| Researcher Affiliation | Academia | KAIST. Correspondence to: Jinheon Baek, and Sung Ju Hwang <{jinheon.baek, sjhwang82}@kaist.ac.kr>. |
| Pseudocode | Yes | Algorithm 1 FED-PUB Client Algorithm |
| Open Source Code | Yes | Our code is available at https://github.com/JinheonBaek/FED-PUB. |
| Open Datasets | Yes | Specifically, we use six datasets: Cora, CiteSeer, PubMed and ogbn-arxiv for citation graphs (Sen et al., 2008; Hu et al., 2020); Computer and Photo for product graphs (McAuley et al., 2015; Shchur et al., 2018). |
| Dataset Splits | Yes | For dataset splits, we randomly sample 20% nodes for training, 35% for validation, and 35% for testing, for all datasets except for the arxiv dataset. |
| Hardware Specification | Yes | We use two types of GPUs: GeForce RTX 2080 Ti and TITAN XP, for training models. |
| Software Dependencies | No | The paper mentions 'PyTorch (Paszke et al., 2019) and PyTorch Geometric (Fey & Lenssen, 2019)' but does not give version numbers for these or for any other software dependencies. |
| Experiment Setup | Yes | Regarding hyperparameters, the number of hidden dimensions is set to 128, and the learning rate is set to 0.001. All models are optimized with Adam optimizer (Kingma & Ba, 2015). |
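The reported 20%/35%/35% dataset split is straightforward to reproduce from the table above. Below is a minimal sketch of such a random node split, assuming a PyTorch Geometric `Planetoid` dataset (Cora as an example); the exact split code in the FED-PUB repository may differ.

```python
import torch
from torch_geometric.datasets import Planetoid

# Load one of the citation datasets named in the paper.
data = Planetoid(root="data", name="Cora")[0]

n = data.num_nodes
perm = torch.randperm(n)                          # random node ordering
n_train, n_val = int(0.20 * n), int(0.35 * n)

train_mask = torch.zeros(n, dtype=torch.bool)
val_mask = torch.zeros(n, dtype=torch.bool)
test_mask = torch.zeros(n, dtype=torch.bool)
train_mask[perm[:n_train]] = True                 # 20% of nodes for training
val_mask[perm[n_train:n_train + n_val]] = True    # 35% for validation
test_mask[perm[n_train + n_val:]] = True          # remaining 35% for testing
```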
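Likewise, the reported hyperparameters (128 hidden dimensions, Adam with learning rate 0.001) map onto a standard training loop. The sketch below continues from the split above (reusing `data` and `train_mask`) and uses a plain two-layer GCN as a stand-in model; FED-PUB's actual architecture and federated training procedure are more involved than this.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class GCN(torch.nn.Module):
    def __init__(self, in_dim, num_classes, hidden_dim=128):  # 128 hidden dims, as reported
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, num_classes)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)

model = GCN(data.num_features, int(data.y.max()) + 1)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)  # Adam, lr 0.001, as reported

model.train()
for epoch in range(100):                          # epoch count is illustrative, not from the paper
    optimizer.zero_grad()
    out = model(data.x, data.edge_index)
    loss = F.cross_entropy(out[train_mask], data.y[train_mask])
    loss.backward()
    optimizer.step()
```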