FedDA: Faster Adaptive Gradient Methods for Federated Constrained Optimization
Authors: Junyi Li, Feihu Huang, Heng Huang
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct experiments over both constrained and unconstrained tasks to confirm the effectiveness of our approach. In this section, we perform experiments to verify the efficacy of our proposed FedDA on the federated biomarker identification task and the general classification tasks. |
| Researcher Affiliation | Academia | Junyi Li Department of Computer Science University of Maryland College Park College Park, MD 20742 junyili.ai@gmail.com Feihu Huang Electrical and Computer Engineering University of Pittsburgh Pittsburgh, PA 15261 huangfeihu2018@gmail.com Heng Huang Department of Computer Science University of Maryland College Park College Park, MD 20742 henghuanghh@gmail.com |
| Pseudocode | Yes | Algorithm 1 FedDA-Server; Algorithm 2 FedDA-Client (xτ, ντ, Hτ) |
| Open Source Code | No | The paper does not provide a statement or link for open-source code for the described methodology. |
| Open Datasets | Yes | We consider a colorectal cancer prediction task on the PathMNIST dataset (Yang et al., 2021; Kather et al., 2019)... We consider a splice site detection task on the MEMset Donor dataset (Meier et al., 2008)... we consider two datasets: CIFAR10 (Krizhevsky et al., 2009) and FEMNIST (Caldas et al., 2018). |
| Dataset Splits | No | The paper mentions splitting data across clients and using a test set, but it does not provide specific training/validation/test splits (e.g., percentages or counts for each). |
| Hardware Specification | Yes | All experiments are run on a machine with an Intel Xeon Gold 6248 CPU and 4 Nvidia Tesla V100 GPUs. |
| Software Dependencies | No | The code is written in PyTorch. No version number is specified for PyTorch or any other software dependency. |
| Experiment Setup | Yes | The number of local steps I is chosen as 5. For all methods, we tune their hyper-parameters to find the best setting. |