Towards Fair Graph Federated Learning via Incentive Mechanisms
Authors: Chenglu Pan, Jiarong Xu, Yue Yu, Ziqi Yang, Qingbiao Wu, Chunping Wang, Lei Chen, Yang Yang
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments show that our model achieves the best trade-off between accuracy and the fairness of model gradient, as well as superior payoff fairness. |
| Researcher Affiliation | Collaboration | ¹Zhejiang University, ²Fudan University, ³ZJU-Hangzhou Global Scientific and Technological Innovation Center, ⁴FinVolution Group |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement or link confirming the release of open-source code for the described methodology. |
| Open Datasets | Yes | We use three graph classification datasets: PROTEINS, DD, and IMDB-BINARY. |
| Dataset Splits | No | The paper states: 'We retain 10% of all the graphs as the global test set for the server, and the remaining graphs are distributed to 10 agents. In each agent, we randomly split 90% for training and 10% for testing.' No explicit validation split is described for local agent data. (A partitioning sketch is given below the table.) |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions using a GIN network and Adam optimizer, and a motif extraction method from a cited paper, but does not provide specific version numbers for any software dependencies or libraries. |
| Experiment Setup | Yes | We set the parameters β1 and β2 in Eq. (10) as 0.05 and 1, the parameter λ in Eq. (14) as 0.1, β in Eq. (4) as 1, and the budget B of payoff as 1... We utilized a three-layer GIN network with a hidden size of 64 and a dropout rate of 0.5... An Adam optimizer with a learning rate of 0.001 and a weight decay of 5e-4 is employed. The number of communication rounds is 200, each agent performs one local training epoch per round, and the batch size is 128. (A configuration sketch is given below the table.) |
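The split described in the Dataset Splits row can be made concrete with a short sketch. This is a minimal reconstruction, assuming PyTorch Geometric's `TUDataset` loader; the paper releases no code, so the helper name `federated_split` and the even round-robin distribution of graphs across agents are assumptions, not the authors' procedure.

```python
import random
from torch_geometric.datasets import TUDataset

def federated_split(name, num_agents=10, seed=0):
    """Partition a TU dataset the way the paper describes (sketch)."""
    dataset = TUDataset(root="data", name=name)  # "PROTEINS", "DD", or "IMDB-BINARY"
    indices = list(range(len(dataset)))
    random.Random(seed).shuffle(indices)

    # Retain 10% of all graphs as the global test set for the server.
    n_server = len(indices) // 10
    server_test = indices[:n_server]

    # Distribute the remaining graphs to the agents. An even, i.i.d.
    # round-robin assignment is assumed here; the paper does not state
    # its exact distribution scheme.
    remaining = indices[n_server:]
    agents = []
    for a in range(num_agents):
        local = remaining[a::num_agents]
        # Within each agent, randomly split 90% for training, 10% for testing.
        cut = int(0.9 * len(local))
        agents.append({"train": local[:cut], "test": local[cut:]})
    return dataset, server_test, agents

dataset, server_test, agents = federated_split("PROTEINS")
```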
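The hyperparameters in the Experiment Setup row translate to roughly the following configuration. This is a sketch under stated assumptions: it uses PyTorch Geometric's `GIN` convenience class, which the paper does not name, and `in_channels`/`num_classes` are dataset-dependent placeholders.

```python
import torch
from torch_geometric.nn.models import GIN

# Placeholders: both values depend on the dataset actually loaded.
in_channels, num_classes = 3, 2

# Three-layer GIN, hidden size 64, dropout 0.5, as reported. A graph-level
# readout (e.g., global mean pooling) would still be needed on top of this
# backbone for graph classification; it is omitted from this sketch.
model = GIN(
    in_channels=in_channels,
    hidden_channels=64,
    num_layers=3,
    out_channels=num_classes,
    dropout=0.5,
)

# Adam with the reported learning rate and weight decay.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=5e-4)

NUM_ROUNDS = 200   # communication rounds
LOCAL_EPOCHS = 1   # local training epochs per agent, per round
BATCH_SIZE = 128
```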