FedTAD: Topology-aware Data-free Knowledge Distillation for Subgraph Federated Learning
Authors: Yinlin Zhu, Xunkai Li, Zhengyu Wu, Di Wu, Miao Hu, Rong-Hua Li
IJCAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on six public datasets consistently demonstrate the superiority of FedTAD over state-of-the-art baselines. In this section, we conduct experiments to verify the effectiveness of FedTAD. |
| Researcher Affiliation | Academia | 1Sun Yat-sen University, Guangzhou, China 2Beijing Institute of Technology, Beijing, China |
| Pseudocode | Yes | The complete algorithm of FedTAD is presented in Algorithm 1. |
| Open Source Code | No | The paper does not provide an explicit statement about releasing source code or a link to a code repository. |
| Open Datasets | Yes | We perform experiments on six widely used public benchmark datasets in graph learning: three small-scale citation network datasets (Cora, CiteSeer, PubMed [Yang et al., 2016]), two medium-scale co-author datasets (CS, Physics [Shchur et al., 2018]), and one large-scale OGB dataset (ogbn-arxiv [Hu et al., 2020]). |
| Dataset Splits | No | The paper mentions 'We perform the hyperparameter search for FedTAD using the Optuna framework [Akiba et al., 2019]' but does not specify the training, validation, and test splits (e.g., percentages or exact counts) for the datasets used in the main experiments, which would be necessary for reproducibility. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory specifications) used for running the experiments. |
| Software Dependencies | No | The paper mentions 'we perform the hyperparameter search for FedTAD using the Optuna framework [Akiba et al., 2019]' but does not provide specific version numbers for Optuna or any other software dependencies crucial for replication. |
| Experiment Setup | Yes | The dimension of the hidden layer is set to 64 or 128. The local training epoch and round are set to 3 and 100, respectively. The learning rate of GNN is set to 1e-2, the weight decay is set to 5e-4, and the dropout is set to 0.5. Based on this, we perform the hyperparameter search for FedTAD using the Optuna framework [Akiba et al., 2019] on λ1 and λ2 within {10^-1, 10^-2, 10^-3}, and I, I_g, I_d within {1, 3, 5, 10}. |
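
The Experiment Setup row fixes the GNN training hyperparameters and gives the Optuna search grids, which is enough to sketch the search loop. The snippet below is a minimal sketch, not the authors' implementation: `run_fedtad` is a hypothetical placeholder for the actual federated training routine (which the paper's text alone does not fully specify), the hidden dimension is pinned to 64 although the paper allows 64 or 128, and the trial budget is an assumption since none is reported.

```python
import optuna

# Fixed training hyperparameters quoted in the Experiment Setup row.
FIXED = {
    "hidden_dim": 64,        # paper uses 64 or 128; 64 is an arbitrary choice here
    "local_epochs": 3,
    "rounds": 100,
    "lr": 1e-2,
    "weight_decay": 5e-4,
    "dropout": 0.5,
}

def run_fedtad(lambda_1, lambda_2, I, I_g, I_d, **fixed):
    """Hypothetical placeholder for FedTAD federated training.

    A real implementation would run `fixed["rounds"]` communication rounds of
    subgraph federated learning with topology-aware data-free knowledge
    distillation and return validation accuracy. A constant is returned here
    only to keep the sketch runnable end to end.
    """
    return 0.0

def objective(trial):
    # Search grids as reported in the paper.
    lambda_1 = trial.suggest_categorical("lambda_1", [1e-1, 1e-2, 1e-3])
    lambda_2 = trial.suggest_categorical("lambda_2", [1e-1, 1e-2, 1e-3])
    I   = trial.suggest_categorical("I",   [1, 3, 5, 10])
    I_g = trial.suggest_categorical("I_g", [1, 3, 5, 10])
    I_d = trial.suggest_categorical("I_d", [1, 3, 5, 10])
    return run_fedtad(lambda_1, lambda_2, I, I_g, I_d, **FIXED)

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)  # trial budget not reported in the paper
print(study.best_params)
```

`suggest_categorical` is used because the grids are small discrete sets; an exhaustive grid search over the 3 × 3 × 4 × 4 × 4 = 576 configurations would also be feasible.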
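
All six benchmarks named in the Open Datasets row are publicly downloadable. The paper does not say which loaders were used, so the following is a sketch assuming the standard PyTorch Geometric loaders for the citation and co-author datasets and the OGB package for ogbn-arxiv; the `root` paths are illustrative.

```python
from torch_geometric.datasets import Planetoid, Coauthor
from ogb.nodeproppred import PygNodePropPredDataset

# Small-scale citation networks (Yang et al., 2016)
cora     = Planetoid(root="data/planetoid", name="Cora")
citeseer = Planetoid(root="data/planetoid", name="CiteSeer")
pubmed   = Planetoid(root="data/planetoid", name="PubMed")

# Medium-scale co-author networks (Shchur et al., 2018)
cs      = Coauthor(root="data/coauthor", name="CS")
physics = Coauthor(root="data/coauthor", name="Physics")

# Large-scale OGB dataset (Hu et al., 2020)
arxiv = PygNodePropPredDataset(name="ogbn-arxiv", root="data/ogb")

# Basic sanity check of what was downloaded.
for ds in (cora, citeseer, pubmed, cs, physics, arxiv):
    data = ds[0]
    print(ds, data.num_nodes, data.num_edges)
```

Note that the Planetoid loaders expose the public train/validation/test masks from Yang et al. (2016), which partly mitigates the missing-splits concern in the Dataset Splits row, whereas the Coauthor datasets ship without predefined splits and the federated subgraph partition itself is still unspecified.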