Personalized Federated Learning with Contextualized Generalization
Authors: Xueyang Tang, Song Guo, Jingcai Guo
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on multiple real-world datasets show that our approach surpasses the state-of-the-art methods on test accuracy by a significant margin. |
| Researcher Affiliation | Academia | 1Department of Computing, The Hong Kong Polytechnic University, Hong Kong, China 2The Hong Kong Polytechnic University Shenzhen Research Institute, Shenzhen, China |
| Pseudocode | Yes | Algorithm 1 CGPFL: Personalized Federated Learning with Contextualized Generalization |
| Open Source Code | No | The paper does not provide any links to open-source code or explicitly state that code for the methodology is available. |
| Open Datasets | Yes | Three datasets including MNIST [LeCun et al., 1998], CIFAR10 [Krizhevsky, 2009], and Fashion MNIST (FMNIST) [Xiao et al., 2017] are used in our experiments. |
| Dataset Splits | No | The paper specifies a train/test split ('75% are used for training and the remaining 25% for testing') but does not mention a separate validation set. |
| Hardware Specification | No | The paper does not specify any hardware details (e.g., CPU, GPU models) used for running the experiments. |
| Software Dependencies | No | The paper mentions using a 'neural network (DNN)' and 'CNN' but does not specify any software libraries or their version numbers (e.g., TensorFlow, PyTorch, scikit-learn versions). |
| Experiment Setup | Yes | We set N = 40, α = 1, λ = 12, S = 5, lr = 0.005 and T = 200 for MNIST and Fashion MNIST (FMNIST), and T = 300, lr = 0.03 for CIFAR10, where lr denotes the learning rate. |
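The reported setup can be collected into a small configuration sketch. This is a hypothetical illustration, not code from the paper: the dictionary keys and the `split_client_data` helper are assumptions, while the numeric values (N = 40, α = 1, λ = 12, S = 5, the learning rates, the communication rounds T, and the 75%/25% train/test split) are taken directly from the quoted rows above.

```python
# Hypothetical sketch of the reported CGPFL experiment settings.
# Variable names are assumptions; numeric values come from the paper's table rows.
CONFIGS = {
    "MNIST":   {"N": 40, "alpha": 1, "lambda": 12, "S": 5, "lr": 0.005, "T": 200},
    "FMNIST":  {"N": 40, "alpha": 1, "lambda": 12, "S": 5, "lr": 0.005, "T": 200},
    "CIFAR10": {"N": 40, "alpha": 1, "lambda": 12, "S": 5, "lr": 0.03,  "T": 300},
}

def split_client_data(samples, train_frac=0.75):
    """Per-client 75%/25% train/test split as reported; no validation set is mentioned."""
    cut = int(len(samples) * train_frac)
    return samples[:cut], samples[cut:]

# Example: 100 local samples -> 75 for training, 25 for testing.
train, test = split_client_data(list(range(100)))
print(len(train), len(test))  # 75 25
```

Note that the paper reports only a two-way split, so any validation strategy (e.g., tuning on the test set or a held-out slice of training data) would be a reproducer's own choice.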