Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

FedCCH: Automatic Personalized Graph Federated Learning for Inter-Client and Intra-Client Heterogeneity

Authors: Pengfei Jiao, Zian Zhou, Meiting Xue, Huijun Tang, Zhidong Zhao, Huaming Wu

IJCAI 2025 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results demonstrate that FedCCH outperforms other state-of-the-art baseline methods. We conduct extensive experiments on 15 graph datasets, including 6 different data combinations, to assess the performance of the proposed FedCCH framework. FedCCH surpasses multiple FL and GFL algorithms, outperforming other methods by up to 5.04% in average test accuracy on BIO-SN-CV.
Researcher Affiliation | Academia | 1 School of Cyberspace, Hangzhou Dianzi University, Hangzhou, China; 2 Center for Applied Mathematics, Tianjin University, Tianjin, China. EMAIL, EMAIL
Pseudocode | Yes | The complete algorithm is shown in the Appendix.
Open Source Code | No | The paper does not provide an explicit statement about open-sourcing the code or a link to a code repository.
Open Datasets | Yes | We use 15 public graph classification datasets from four different fields, including small molecules (MUTAG, BZR, COX2, DHFR, PTC-MR, AIDS, NCI1), bioinformatics (ENZYMES, DD), social networks (COLLAB, IMDB-BINARY, IMDB-MULTI), and computer vision (Letter-low, Letter-high, Letter-med) [Morris et al., 2020].
Dataset Splits | Yes | In each setting, each client owns the corresponding dataset, which by default is randomly divided into three parts: 80% for training, 10% for validation, and 10% for testing.
Hardware Specification | Yes | The experiments are conducted on a system equipped with an NVIDIA 3090 Ti GPU.
Software Dependencies | No | The paper mentions software components such as the Graph Isomorphism Network (GIN) and the Adam optimizer, but does not specify their version numbers.
Experiment Setup | Yes | For the model, we primarily utilize a three-layer Graph Isomorphism Network (GIN). Before the activation function, we incorporate Batch Normalization and Graph Normalization layers. The hidden layer dimension of the GIN is set to 64. For local training, we use 1 epoch and a batch size of 64. Optimization is performed using the Adam optimizer with a weight decay of 5×10⁻⁵ and a learning rate of 0.001. Regarding the FL settings, we set the number of communication rounds to 200 for all FL methods.
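The reported setup (three-layer GIN, hidden dimension 64, BatchNorm before activation, Adam with lr 0.001 and weight decay 5×10⁻⁵) can be sketched in plain PyTorch. This is an illustrative reimplementation, not the authors' unreleased code: the `GINLayer`/`GIN` class names, the dense-adjacency message passing, sum pooling, and the input dimension are assumptions, and the paper's additional Graph Normalization layer is omitted for brevity.

```python
# Hypothetical sketch of the described local model and optimizer;
# not the FedCCH authors' implementation (their code is not released).
import torch
import torch.nn as nn

class GINLayer(nn.Module):
    """One GIN layer: h' = MLP((1 + eps) * h + A @ h), with a dense adjacency A."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.eps = nn.Parameter(torch.zeros(1))  # learnable epsilon, as in GIN
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, out_dim),
            nn.BatchNorm1d(out_dim),  # BatchNorm before the activation, per the paper
            nn.ReLU(),                # (the paper's extra GraphNorm is omitted here)
        )

    def forward(self, h, adj):
        return self.mlp((1 + self.eps) * h + adj @ h)

class GIN(nn.Module):
    """Three-layer GIN with hidden dimension 64, sum-pooled for graph classification."""
    def __init__(self, in_dim, hidden=64, n_classes=2, n_layers=3):
        super().__init__()
        dims = [in_dim] + [hidden] * n_layers
        self.layers = nn.ModuleList(
            GINLayer(d_in, d_out) for d_in, d_out in zip(dims, dims[1:])
        )
        self.readout = nn.Linear(hidden, n_classes)

    def forward(self, h, adj):
        for layer in self.layers:
            h = layer(h, adj)
        # Sum pooling over nodes gives one graph-level embedding.
        return self.readout(h.sum(dim=0, keepdim=True))

model = GIN(in_dim=7)  # input dimension is an assumption (e.g. MUTAG node labels)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=5e-5)
```

In a federated run, each client would train this model locally for 1 epoch per round (batch size 64) and exchange parameters with the server for 200 communication rounds, as the setup row describes.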