Federated Graph Semantic and Structural Learning
Authors: Wenke Huang, Guancheng Wan, Mang Ye, Bo Du
IJCAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Empirical results on three graph datasets manifest the superiority of the proposed method over counterparts." (see also Section 4, Experiments) |
| Researcher Affiliation | Academia | School of Computer Science, Wuhan University, Wuhan, China; Hubei Luojia Laboratory, Wuhan, China. {wenkehuang, guanchengwan, yemang, dubo}@whu.edu.cn |
| Pseudocode | Yes | Algorithm 1: The FGSSL Framework |
| Open Source Code | No | No explicit statement or link for open-source code release was found. |
| Open Datasets | Yes | Cora [McCallum et al., 2000] dataset consists of 2708 scientific publications classified into one of seven classes. ... Citeseer [Giles et al., 1998] dataset consists of 3312 scientific publications classified into one of six classes and 4732 edges. ... Pubmed [Sen et al., 2008] dataset consists of 19717 scientific papers on diabetes... |
| Dataset Splits | Yes | To conduct the experiments uniformly and fairly, we split the nodes into train/valid/test sets, where the ratio is 60% : 20% : 20% . |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory, cloud instance types) were mentioned for running experiments. |
| Software Dependencies | No | No specific software dependencies with version numbers were mentioned. |
| Experiment Setup | Yes | The hidden dimensions are 128 for all datasets, and classifier F maps the embedding from 128 dimensions to 7, 6, and 3 dimensions, which are the numbers of classification classes for Cora, Citeseer, and Pubmed respectively. For all networks, we use SGD [Robbins and Monro, 1951] as the selected optimizer with momentum 0.9 and weight decay 5e-4. The communication round is 200 and the local training epoch is 4 for all datasets. |
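For reproduction, the hyperparameters reported in the Experiment Setup and Dataset Splits rows can be collected into a single configuration. The sketch below is not the authors' code: it records the reported values (hidden dimension 128, SGD with momentum 0.9 and weight decay 5e-4, 200 communication rounds, 4 local epochs, 60%/20%/20% node split) and pairs them with a minimal equal-weight federated-averaging step; the `fedavg` helper and the flat parameter-dict layout are illustrative assumptions, not part of the paper.

```python
# Reported FGSSL training setup (from the paper's Section 4) as a config dict.
# The client-simulation details below are hypothetical, added for illustration.
CONFIG = {
    "hidden_dim": 128,
    "num_classes": {"Cora": 7, "Citeseer": 6, "Pubmed": 3},
    "optimizer": {"name": "SGD", "momentum": 0.9, "weight_decay": 5e-4},
    "communication_rounds": 200,
    "local_epochs": 4,
    "node_split": {"train": 0.6, "valid": 0.2, "test": 0.2},
}


def fedavg(client_params):
    """Equal-weight average of per-client parameter dicts (illustrative).

    Each client contributes a dict mapping parameter name -> float value;
    the server returns the element-wise mean across clients.
    """
    n = len(client_params)
    keys = client_params[0].keys()
    return {k: sum(p[k] for p in client_params) / n for k in keys}


# Minimal usage: three clients reporting one shared scalar parameter.
aggregated = fedavg([{"w": 1.0}, {"w": 2.0}, {"w": 3.0}])
```

Here `aggregated["w"]` is the mean of the three client values, 2.0; in the paper's setting this aggregation would run once per communication round, after each client's 4 local epochs.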