Effective Federated Graph Matching
Authors: Yang Zhou, Zijie Zhang, Zeru Zhang, Lingjuan Lyu, Wei-Shinn Ku
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we have evaluated the performance of our UFGM model and other comparison methods for federated graph matching over several representative federated graph datasets to date. |
| Researcher Affiliation | Collaboration | Auburn University, USA; Sony AI, Japan. |
| Pseudocode | No | The paper describes its methods using prose and mathematical equations but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | We promise to release our open-source codes on GitHub and maintain a project website with detailed documentation for long-term access by other researchers and end-users after the paper is accepted. |
| Open Datasets | Yes | Datasets. We focus on three representative graph learning benchmark datasets: social networks (SNS) (Zhang et al., 2015), protein-protein interaction networks (PPI) (Zitnik & Leskovec, 2017), and DBLP coauthor graphs (DBLP) (DBL). |
| Dataset Splits | No | For the supervised learning methods, the training data ratio over the above three datasets is all fixed to 20%. We train the models on the training set and test them on the test set for three datasets. The paper only mentions training and test sets, without specifying a validation split. |
| Hardware Specification | Yes | The experiments were conducted on a compute server running on Red Hat Enterprise Linux 7.2 with 2 CPUs of Intel Xeon E5-2650 v4 (at 2.66 GHz) and 8 GPUs of NVIDIA GeForce GTX 2080 Ti (with 11GB of GDDR6 on a 352-bit memory bus and memory bandwidth in the neighborhood of 620GB/s), 256GB of RAM, and 1TB of HDD. |
| Software Dependencies | Yes | The codes were implemented in Python 3.7.3 and PyTorch 1.0.14. We also employ Numpy 1.16.4 and Scipy 1.3.0 in the implementation. |
| Experiment Setup | Yes | All models were trained for 2,000 rounds, with a batch size of 500, and a learning rate of 0.05. |
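
Since the authors' code is not released, the exact training pipeline cannot be verified. The snippet below is a minimal sketch, assuming PyTorch, that only collects the hyperparameters reported in the Experiment Setup and Dataset Splits rows above (2,000 training rounds, batch size 500, learning rate 0.05, 20% training ratio); the optimizer choice and all identifiers are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical configuration sketch based solely on the hyperparameters
# reported in the paper; names and the optimizer are assumptions.
import torch

config = {
    "rounds": 2000,          # training rounds reported in the paper
    "batch_size": 500,       # batch size reported in the paper
    "learning_rate": 0.05,   # learning rate reported in the paper
    "train_ratio": 0.20,     # supervised training data ratio reported in the paper
}

def make_optimizer(model: torch.nn.Module) -> torch.optim.Optimizer:
    # SGD is an assumption; the paper does not state which optimizer was used.
    return torch.optim.SGD(model.parameters(), lr=config["learning_rate"])
```
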