FedCompetitors: Harmonious Collaboration in Federated Learning with Competing Participants
Authors: Shanli Tan, Hao Cheng, Xiaohu Wu, Han Yu, Tiantian He, Yew Soon Ong, Chongjun Wang, Xiaofeng Tao
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on real-world and synthetic data demonstrate its efficacy compared to five alternative approaches, and its ability to establish efficient collaboration networks among FL-PTs. |
| Researcher Affiliation | Academia | (1) National Engineering Research Center of Mobile Network Technologies, Beijing University of Posts and Telecommunications, China; (2) State Key Laboratory for Novel Software Technology, Nanjing University, China; (3) School of Computer Science and Engineering, Nanyang Technological University, Singapore; (4) Agency for Science, Technology and Research, Singapore |
| Pseudocode | Yes | Algorithm 1: Collaborator Selection; Algorithm 2: ILP Solver (an illustrative sketch follows the table) |
| Open Source Code | No | No explicit statement or link for open-source code release. |
| Open Datasets | Yes | We conduct experiments on synthetic data and the CIFAR-10 dataset. To investigate the practicality of FedCompetitors, we also adopt the electronic health record (EHR) dataset eICU (Pollard et al. 2018)... |
| Dataset Splits | No | The paper describes how data is prepared and distributed among FL-PTs, but does not provide explicit training/validation/test dataset splits with percentages or counts for reproduction. |
| Hardware Specification | No | No specific hardware details (like GPU/CPU models, memory, or processing units) used for experiments are mentioned. |
| Software Dependencies | No | The paper does not provide specific version numbers for software dependencies or libraries used in the experiments. |
| Experiment Setup | No | The paper describes general model architectures (an MLP with one hidden layer) and defers some settings to external works (e.g., 'the setting in (Cui et al. 2022)' for CIFAR-10), but does not explicitly state hyperparameter values or detailed training configurations (an illustrative MLP sketch follows the table). |
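
Since no source code is released (see the Open Source Code row), the following is a minimal, hypothetical sketch of what an ILP-based collaborator selection in the spirit of Algorithms 1-2 might look like. The benefit matrix, the per-participant budget of two helpers, and the "no direct help between competitors" constraint are all our own simplifying assumptions for illustration, not the paper's exact formulation.

```python
# Hypothetical sketch of ILP-based collaborator selection among FL-PTs.
# Binary variable x[i, j] = 1 iff FL-PT i uses FL-PT j's contribution.
# The benefit values, competition pairs, and budget below are invented.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

n = 4  # number of FL-PTs (illustrative)
# benefit[i, j]: assumed benefit FL-PT i gains from FL-PT j's updates
benefit = np.array([[0, 3, 1, 0],
                    [2, 0, 0, 4],
                    [1, 0, 0, 2],
                    [0, 5, 2, 0]], dtype=float)
# competes[i, j] = True if FL-PTs i and j are market competitors
competes = np.zeros((n, n), dtype=bool)
competes[0, 3] = competes[3, 0] = True

def idx(i, j):
    return i * n + j  # flatten (i, j) into scipy's variable vector

c = -benefit.flatten()  # milp minimizes, so negate to maximize benefit

rows, ubs = [], []
for i in range(n):
    for j in range(n):
        if i == j or competes[i, j]:
            # Forbid self-loops and direct help between competitors.
            row = np.zeros(n * n)
            row[idx(i, j)] = 1.0
            rows.append(row)
            ubs.append(0.0)
for i in range(n):
    # Hypothetical budget: each FL-PT uses at most 2 collaborators.
    row = np.zeros(n * n)
    row[idx(i, 0):idx(i, 0) + n] = 1.0
    rows.append(row)
    ubs.append(2.0)

res = milp(
    c,
    constraints=LinearConstraint(np.vstack(rows), ub=np.array(ubs)),
    integrality=np.ones(n * n),  # all variables integer
    bounds=Bounds(0, 1),         # binary via {0, 1} integer bounds
)
x = res.x.reshape(n, n).round().astype(int)
print("collaboration matrix:\n", x)
```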
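
Likewise, the Experiment Setup row notes only an "MLP with one hidden layer" without hyperparameters. Below is a minimal PyTorch sketch of such a model; the hidden width (128), ReLU activation, and CIFAR-10 input shape are assumptions made for illustration, not values stated in the paper.

```python
# Minimal sketch of a one-hidden-layer MLP as named in the paper's
# experiment setup; width, activation, and input shape are assumed.
import torch
import torch.nn as nn

class OneHiddenMLP(nn.Module):
    def __init__(self, in_dim: int, hidden_dim: int = 128, n_classes: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                   # e.g. 3x32x32 CIFAR-10 image -> 3072
            nn.Linear(in_dim, hidden_dim),  # the single hidden layer
            nn.ReLU(),
            nn.Linear(hidden_dim, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = OneHiddenMLP(in_dim=3 * 32 * 32)    # assumed CIFAR-10 input size
logits = model(torch.randn(8, 3, 32, 32))   # dummy batch to check shapes
print(logits.shape)                         # torch.Size([8, 10])
```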