FedHGN: A Federated Framework for Heterogeneous Graph Neural Networks
Authors: Xinyu Fu, Irwin King
IJCAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive experiments on benchmark datasets with varying numbers of clients for node classification. Results show that FedHGN consistently outperforms local training and conventional FL methods. |
| Researcher Affiliation | Academia | Xinyu Fu, Irwin King; The Chinese University of Hong Kong, Hong Kong, China; {xyfu, king}@cse.cuhk.edu.hk |
| Pseudocode | Yes | Algorithm 1: FedHGN framework. |
| Open Source Code | Yes | The code is available at https://github.com/cynricfu/FedHGN. |
| Open Datasets | Yes | We select widely-adopted heterogeneous graph datasets for our node classification experiments: AIFB, MUTAG, and BGS preprocessed by Deep Graph Library (DGL) [Wang et al., 2019a]. |
| Dataset Splits | Yes | To simulate the federated setting, we randomly split each dataset into K = 3, 5, and 10 clients. We propose two random splitting strategies for heterogeneous graphs: (1) Random Edges (RE), which randomly allocates edges to K clients, and (2) Random Edge Types (RET), which randomly allocates edge types to K clients. Averaged statistics of the BGS dataset split into K clients are summarized in Table 2. (See the RE/RET splitting sketch below this table.) |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions using "Deep Graph Library (DGL)" and "RGCN-like architecture" but does not provide specific version numbers for these or other software dependencies. |
| Experiment Setup | Yes | Before starting the federated training process, the server and the clients negotiate the hyperparameters of the HGNN model used, such as the number of bases (B) in SWD. The client-side objective function is a sum of the task-specific loss and the CA regularization term: $L_k = L_k^{\text{task}} + \lambda L_k^{\text{align}}$... From Figure 4, we find that FedHGN is sensitive to the choice of B. The optimal performance is reached at around B = 35 and B = 20 for AIFB (RE) and AIFB (RET), respectively... For the alignment regularization factor λ, FedHGN performance does not change much when λ is reasonable, i.e., for λ ≤ 5. But when λ gets larger than 5, the testing accuracy drops dramatically on both AIFB (RE) and AIFB (RET). Hence, we set λ = 0.5 for our main experiments. (See the loss sketch below this table.) |
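
The RE and RET strategies quoted in the Dataset Splits row are simple enough to sketch in code. Below is a minimal, hypothetical Python illustration that assumes a heterogeneous graph stored as a list of (src, edge_type, dst) triples rather than the DGL graph objects the paper uses; the function names and edge representation are illustrative, not taken from the authors' code.

```python
import random
from collections import defaultdict

def split_random_edges(edges, num_clients, seed=0):
    """Random Edges (RE): allocate individual edges uniformly at
    random across clients, so every client can see every edge type."""
    rng = random.Random(seed)
    shards = defaultdict(list)
    for edge in edges:
        shards[rng.randrange(num_clients)].append(edge)
    return [shards[k] for k in range(num_clients)]

def split_random_edge_types(edges, num_clients, seed=0):
    """Random Edge Types (RET): allocate whole edge types to clients,
    so each client holds a subset of the schema's relations."""
    rng = random.Random(seed)
    etypes = sorted({etype for _, etype, _ in edges})
    assignment = {et: rng.randrange(num_clients) for et in etypes}
    shards = defaultdict(list)
    for src, etype, dst in edges:
        shards[assignment[etype]].append((src, etype, dst))
    return [shards[k] for k in range(num_clients)]
```

With K = 3, for instance, `split_random_edges(edges, 3)` returns three edge shards whose union is the original edge set; RET instead partitions the schema, which is what makes the clients' graph schemas, and hence their HGNN weight spaces, heterogeneous.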
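
The client-side objective $L_k = L_k^{\text{task}} + \lambda L_k^{\text{align}}$ from the Experiment Setup row can likewise be sketched. The PyTorch snippet below assumes a node-classification setup with a cross-entropy task loss and an MSE form for the coefficients-alignment (CA) term; the quoted text does not spell out the exact alignment form, so the `mse_loss` choice and all argument names are assumptions, with λ = 0.5 matching the paper's reported default.

```python
import torch
import torch.nn.functional as F

def client_objective(logits, labels, local_coeffs, server_coeffs, lam=0.5):
    """Sketch of L_k = L_task + λ·L_align for client k.

    local_coeffs / server_coeffs: the client's basis coefficients and the
    server-broadcast reference they are regularized toward (hypothetical
    names; the CA term's exact form is an assumption here).
    """
    task_loss = F.cross_entropy(logits, labels)           # L_task: node classification
    align_loss = F.mse_loss(local_coeffs, server_coeffs)  # L_align: CA regularizer (assumed MSE)
    return task_loss + lam * align_loss
```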