Adaptive Test-Time Personalization for Federated Learning
Authors: Wenxuan Bao, Tianxin Wei, Haohan Wang, Jingrui He
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate its superiority in handling various distribution shifts including label shift, image corruptions, and domain shift, outperforming existing TTA methods across multiple datasets and model architectures. (A minimal TTA baseline sketch follows the table.) |
| Researcher Affiliation | Academia | University of Illinois Urbana-Champaign {wbao4,twei10,haohanw,jingrui}@illinois.edu |
| Pseudocode | Yes | Algorithm 1 ATP Training |
| Open Source Code | Yes | Our code is available at https://github.com/baowenxuan/ATP. |
| Open Datasets | Yes | We evaluate ATP on a variety of models, datasets and distribution shifts. We first evaluate on CIFAR-10(-C)... To test ATP under more challenging domain shifts, we then evaluate ATP on two domain generalization datasets: Digits-5 [25] and PACS [21]. |
| Dataset Splits | Yes | Each source client has 160 training samples and 40 validation samples, while each target client has 200 unlabeled testing samples. (See the split sketch after the table.) |
| Hardware Specification | Yes | We did our experiments with a single NVIDIA Tesla V100 GPU. However, our experiment should require less than 2 GB of GPU memory. |
| Software Dependencies | No | The paper mentions PyTorch once ("Here we follow the definition in PyTorch.") but does not provide specific version numbers for it or any other software dependencies, such as Python or CUDA. |
| Experiment Setup | Yes | Detailed experiment settings are given in Appendix C.1. |
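
For context on the TTA baselines cited in the Research Type row, the sketch below shows generic test-time adaptation by entropy minimization over batch-norm affine parameters, in the style of Tent. This is a minimal sketch of the baseline family ATP is compared against, not ATP's Algorithm 1; the helper names (`bn_affine_params`, `adapt_batch`, `unlabeled_test_loader`) are hypothetical, not from the ATP codebase.

```python
import torch
import torch.nn as nn

def bn_affine_params(model: nn.Module):
    """Collect batch-norm affine parameters (the only weights Tent updates)."""
    params = []
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d)) and m.affine:
            params += [m.weight, m.bias]
    return params

def adapt_batch(model: nn.Module, x: torch.Tensor, optimizer):
    """One adaptation step: minimize mean prediction entropy on an unlabeled batch."""
    logits = model(x)
    # log_softmax keeps the entropy numerically stable
    entropy = -(logits.softmax(dim=1) * logits.log_softmax(dim=1)).sum(dim=1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return logits.detach()

# Usage (model and loader are placeholders):
# model = ...  # global model trained on source clients
# optimizer = torch.optim.SGD(bn_affine_params(model), lr=1e-3)
# model.train()  # so batch norm uses test-batch statistics
# for x in unlabeled_test_loader:
#     preds = adapt_batch(model, x, optimizer)
```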
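
To make the Dataset Splits row concrete, here is a minimal sketch of the per-client partitioning it describes: 160 training and 40 validation samples per source client, with each target client keeping 200 unlabeled test samples. The function and variable names are illustrative assumptions, not taken from the ATP repository.

```python
import numpy as np

def split_source_client(indices, rng, n_train=160, n_val=40):
    """Shuffle one source client's local sample indices into train/val."""
    perm = rng.permutation(indices)
    return perm[:n_train], perm[n_train:n_train + n_val]

rng = np.random.default_rng(0)
client_indices = np.arange(200)   # one source client's 200 local samples
train_idx, val_idx = split_source_client(client_indices, rng)
assert len(train_idx) == 160 and len(val_idx) == 40
# Target clients keep all 200 samples as an unlabeled test set.
```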