Adapt to Adaptation: Learning Personalization for Cross-Silo Federated Learning
Authors: Jun Luo, Shandong Wu
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We empirically evaluate our method's convergence and generalization behaviors, and perform extensive experiments on two benchmark datasets and two medical imaging datasets under two non-IID settings. |
| Researcher Affiliation | Academia | Jun Luo (1), Shandong Wu (1,2,3,4); 1: Intelligent Systems Program, University of Pittsburgh; 2: Department of Radiology, University of Pittsburgh; 3: Department of Biomedical Informatics, University of Pittsburgh; 4: Department of Bioengineering, University of Pittsburgh; jul117@pitt.edu, wus3@upmc.edu |
| Pseudocode | Yes | Algorithm 1 APPLE |
| Open Source Code | Yes | The code is publicly available at https://github.com/ljaiverson/pFL-APPLE. |
| Open Datasets | Yes | Datasets. We use four public datasets including two benchmark datasets: MNIST and CIFAR10, and two medical imaging datasets from the MedMNIST datasets collection [Yang et al., 2021], namely the OrganMNIST(axial) dataset: an 11-class liver tumor image dataset, and the PathMNIST dataset: a 9-class colorectal cancer image dataset. |
| Dataset Splits | No | The paper mentions partitioning datasets into a "training set and a test set" and discusses training for a certain number of "rounds" and "local epochs," but it does not explicitly provide details about a validation set split (e.g., percentages or counts). |
| Hardware Specification | No | The paper mentions using "the Bridges-2 system... at the Pittsburgh Supercomputing Center" but does not specify any particular CPU models, GPU models, or detailed hardware configurations (e.g., memory, specific processor types) used for the experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies or libraries used in the experiments (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | We train each method 160 rounds with 5 local epochs and summarize the results as follows. In Equation 7, λ is a dynamic function ranging from 0 to 1 with respect to the round number, r, and µ is a scalar coefficient for the proximal term. More details regarding the loss scheduler are presented in Appendix A.1. |
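The Experiment Setup row describes λ only as a round-indexed function bounded between 0 and 1. A minimal sketch of such a loss scheduler is below; the exponential-decay form, the decay constant `decay`, and the function name are illustrative assumptions, not the paper's exact scheduler (which is detailed in its Appendix A.1):

```python
import math

def lambda_scheduler(r, total_rounds=160, decay=0.2):
    """Hypothetical loss scheduler: returns a value in (0, 1] that starts
    near 1 at round r = 0 and decays toward 0 as r grows. The exponential
    form and the decay constant are illustrative assumptions only."""
    return math.exp(-r / (decay * total_rounds))
```

Any monotone map from the round number into [0, 1] would satisfy the description in the table; the exponential form is chosen here only for concreteness.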