Subgraph Federated Learning with Missing Neighbor Generation
Authors: Ke Zhang, Carl Yang, Xiaoxiao Li, Lichao Sun, Siu Ming Yiu
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical results on four real-world graph datasets with synthesized subgraph federated learning settings demonstrate the effectiveness and efficiency of our proposed techniques. |
| Researcher Affiliation | Academia | Ke Zhang1,4, Carl Yang1, Xiaoxiao Li2, Lichao Sun3, Siu Ming Yiu4; 1Emory University, 2University of British Columbia, 3Lehigh University, 4University of Hong Kong |
| Pseudocode | Yes | Appendix A shows the pseudocode of FedSage+. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. |
| Open Datasets | Yes | We synthesize the distributed subgraph system with four widely used real-world graph datasets, i.e., Cora [25], Citeseer [25], PubMed [22], and MSAcademic [26]. |
| Dataset Splits | Yes | The training-validation-testing ratio is 60%-20%-20% due to limited sizes of local subgraphs. |
| Hardware Specification | Yes | We implement FedSage and FedSage+ in Python and execute all experiments on a server with 8 NVIDIA GeForce GTX 1080 Ti GPUs. |
| Software Dependencies | No | The paper mentions implementing in 'Python' but gives no version numbers for Python or for any libraries (e.g., PyTorch, TensorFlow) that would be needed for replication. |
| Experiment Setup | Yes | The number of nodes sampled in each layer of GraphSage is 5. We use batch size 64 and set training epochs to 50. The training-validation-testing ratio is 60%-20%-20% due to limited sizes of local subgraphs. ... All λs are simply set to 1. Optimization is done with Adam with a learning rate of 0.001. (See the hedged sketch after this table.) |
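
For concreteness, the split and training hyperparameters quoted above can be read as the following local-training sketch. This is a minimal illustration, not the authors' released code: the paper names no libraries, so PyTorch Geometric, the 64-unit hidden layer, and the single-subgraph Cora setup are all assumptions here, and the federated averaging and λ-weighted NeighGen losses of FedSage+ are omitted.

```python
# Hypothetical sketch of the reported setup: a 2-layer GraphSage classifier,
# 5 neighbors sampled per layer, batch size 64, 50 epochs, Adam at lr 0.001.
import torch
import torch.nn.functional as F
from torch_geometric.datasets import Planetoid
from torch_geometric.loader import NeighborLoader
from torch_geometric.nn import SAGEConv

class GraphSage(torch.nn.Module):
    def __init__(self, in_dim, hid_dim, num_classes):
        super().__init__()
        self.conv1 = SAGEConv(in_dim, hid_dim)
        self.conv2 = SAGEConv(hid_dim, num_classes)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)

data = Planetoid(root="data", name="Cora")[0]  # one of the four datasets

# 60%-20%-20% train/validation/test node split, as reported in the paper.
n = data.num_nodes
perm = torch.randperm(n)
train_idx = perm[: int(0.6 * n)]
val_idx = perm[int(0.6 * n) : int(0.8 * n)]
test_idx = perm[int(0.8 * n) :]

loader = NeighborLoader(
    data,
    num_neighbors=[5, 5],  # 5 nodes sampled in each of the 2 layers
    batch_size=64,         # batch size 64
    input_nodes=train_idx,
)

model = GraphSage(data.num_features, 64, 7)  # hidden size 64 is assumed
opt = torch.optim.Adam(model.parameters(), lr=0.001)  # Adam, lr 0.001

for epoch in range(50):  # 50 training epochs
    for batch in loader:
        opt.zero_grad()
        # Seed nodes come first in each NeighborLoader batch.
        out = model(batch.x, batch.edge_index)[: batch.batch_size]
        loss = F.cross_entropy(out, batch.y[: batch.batch_size])
        loss.backward()
        opt.step()
```

Swapping `input_nodes` for `val_idx` or `test_idx` evaluates on the other two splits of the 60%-20%-20% partition.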