SPACE: Single-round Participant Amalgamation for Contribution Evaluation in Federated Learning
Authors: Yi-Chung Chen, Hsi-Wen Chen, Shun-Gui Wang, Ming-Syan Chen
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results demonstrate that SPACE outperforms state-of-the-art methods in terms of both running time and Pearson's Correlation Coefficient (PCC). Furthermore, extensive experiments conducted on applications, client reweighting, and client selection highlight the effectiveness of SPACE. |
| Researcher Affiliation | Academia | Yi-Chung Chen, National Taiwan University, r10942081@ntu.edu.tw; Hsi-Wen Chen, National Taiwan University, hwchen@arbor.ee.ntu.edu.tw; Shun-Gui Wang, National Taiwan University, r11921099@ntu.edu.tw; Ming-Syan Chen, National Taiwan University, mschen@ntu.edu.tw |
| Pseudocode | Yes | Algorithm 1 SPACE |
| Open Source Code | Yes | The code is available at https://github.com/culiver/SPACE. |
| Open Datasets | Yes | we conduct experiments on the widely adopted image datasets MNIST [24] and CIFAR10 [22]. |
| Dataset Splits | No | The paper mentions using a 'validation set' multiple times (e.g., 'the evaluation of model performance depends on the size of the validation set on the server.', 'prototypes are built for the server's validation set.'), and uses standard datasets like MNIST and CIFAR10, but does not explicitly provide the specific percentages or counts for training, validation, and test splits. |
| Hardware Specification | Yes | Execution time (in seconds) is measured on a single V100 GPU without parallel training to assess the time efficiency of the approaches. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | For adjusting the utility function, we empirically set k to 100, while T is set to 0.95 and 0.5 for evaluation on MNIST and CIFAR10, respectively. |
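
The table above notes that SPACE is evaluated with Pearson's Correlation Coefficient (PCC). The minimal sketch below only illustrates how such a PCC comparison might be computed; it is not taken from the paper's code, and the choice of reference values and the client scores shown are placeholder assumptions.

```python
import numpy as np

# Hypothetical contribution scores for five clients produced by a
# contribution-evaluation method (e.g., SPACE), and a reference valuation
# to compare against. All numbers here are illustrative, not from the paper.
estimated = np.array([0.12, 0.30, 0.08, 0.25, 0.25])
reference = np.array([0.10, 0.32, 0.05, 0.28, 0.25])

# Pearson's Correlation Coefficient (PCC): measures how closely the
# estimated contributions track the reference values.
pcc = np.corrcoef(estimated, reference)[0, 1]
print(f"PCC = {pcc:.3f}")
```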