Beyond the Federation: Topology-aware Federated Learning for Generalization to Unseen Clients
Authors: Mengmeng Ma, Tang Li, Xi Peng
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical evaluation on a variety of real-world datasets verifies TFL's superior OOF robustness and scalability. ... 4. Experiments ... 4.2. Evaluation on OOF-resiliency ... 4.3. Evaluation on Scalability ... 4.4. Evaluation on In-federation Performance ... 4.5. Evaluation on Effectiveness of Client Clustering ... 4.6. Ablation Study |
| Researcher Affiliation | Academia | Mengmeng Ma 1 Tang Li 1 Xi Peng 1 Deep REAL Lab, Department of Computer & Information Sciences, University of Delaware. |
| Pseudocode | Yes | Algorithm 1 Topology-aware Federated Learning |
| Open Source Code | No | The paper does not contain an explicit statement about releasing source code for the described methodology or a direct link to a code repository. |
| Open Datasets | Yes | TFL is evaluated on our curated real-world datasets (① eICU, ② FeTS, ③ TPT-48) and standard benchmarks (④ CIFAR-10/-100, ⑤ PACS), spanning a wide range of tasks including classification, regression, and segmentation. ... ① eICU (Pollard et al., 2018) ... ② FeTS (Pati et al., 2022b) ... ③ TPT-48 (Vose et al., 2014) ... ④ CIFAR-10/-100 (Krizhevsky & Hinton, 2009) ... ⑤ PACS (Li et al., 2017) |
| Dataset Splits | No | The paper describes training and testing splits (e.g., '3 domains are used (15 clients) for training and 1 domain (5 clients) for testing' for PACS), but does not explicitly provide details about a separate validation set or its split percentage/counts. |
| Hardware Specification | Yes | All experiments are conducted using a server with 8 NVIDIA A6000 GPUs. |
| Software Dependencies | No | The paper mentions optimizers (SGD) and model architectures (ResNet18, U-Net) but does not provide specific software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x, CUDA 11.x). |
| Experiment Setup | Yes | For the PACS dataset... learning rate of 0.01, momentum of 0.9, weight decay of 5e-4, and a batch size of 8. ... For eICU... 30 communication rounds, using a batch size of 64 and a learning rate of 0.01, and report the performance on unseen hospitals. Within each communication round, clients perform 5 epochs (E = 5) of local optimization using SGD. |
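
The reported setup maps onto a standard federated training loop: each client runs E = 5 epochs of local SGD (lr 0.01, momentum 0.9, weight decay 5e-4) per communication round, after which client updates are aggregated. The sketch below illustrates that loop with the paper's stated hyperparameters. It is a minimal, generic FedAvg-style illustration, not the paper's Algorithm 1 (TFL's topology-aware aggregation is not reproduced here); the function names, data loaders, and uniform averaging are assumptions for illustration.

```python
# Minimal FedAvg-style sketch of the reported local-optimization setup.
# Hyperparameters (SGD, lr=0.01, momentum=0.9, weight_decay=5e-4, E=5 local
# epochs; ResNet18 backbone) come from the paper; the loop structure and
# uniform averaging are illustrative assumptions, NOT TFL's Algorithm 1.
import copy
import torch
import torch.nn as nn
from torchvision.models import resnet18


def local_update(global_model, loader, epochs=5, device="cuda"):
    """Run E epochs of local SGD on one client, starting from the global model."""
    model = copy.deepcopy(global_model).to(device)
    opt = torch.optim.SGD(model.parameters(), lr=0.01,
                          momentum=0.9, weight_decay=5e-4)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict()


def average_states(states):
    """Uniform parameter averaging over client updates (plain FedAvg,
    used here in place of TFL's topology-aware aggregation)."""
    avg = copy.deepcopy(states[0])
    for k in avg:
        stacked = torch.stack([s[k].float() for s in states])
        avg[k] = stacked.mean(dim=0).to(avg[k].dtype)
    return avg


def communication_round(global_model, client_loaders):
    """One round: every client trains locally, then updates are averaged.
    `client_loaders` is a hypothetical list of per-client DataLoaders."""
    states = [local_update(global_model, dl) for dl in client_loaders]
    global_model.load_state_dict(average_states(states))
    return global_model


# Example: 30 rounds, as reported for eICU (batch size set in the loaders).
# model = resnet18(num_classes=7)
# for _ in range(30):
#     model = communication_round(model, client_loaders)
```
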