Federated Adaptive Prompt Tuning for Multi-Domain Collaborative Learning
Authors: Shangchao Su, Mingzhao Yang, Bin Li, Xiangyang Xue
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform extensive experiments on two multi-domain image classification datasets under two different settings (supervised and unsupervised). The results show that FedAPT can achieve better performance with less than 10% of the number of parameters of the fully trained model, and the global model can perform well in diverse client domains simultaneously. |
| Researcher Affiliation | Academia | Shanghai Key Laboratory of Intelligent Information Processing, School of Computer Science, Fudan University {scsu20, mzyang20, libin, xyxue}@fudan.edu.cn |
| Pseudocode | No | The paper describes the algorithm steps in text and flow diagrams but does not provide formal pseudocode or an algorithm block. |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | We adopt two datasets, Office-Caltech10 (Gong et al. 2012) and DomainNet (Peng et al. 2019a). |
| Dataset Splits | No | The paper describes how datasets are split across clients and domains (e.g., 'we use each domain as a client', 'split each domain in DomainNet into five clients'), but it does not specify explicit train/validation/test splits (e.g., percentages or counts) for the overall datasets or individual client datasets. (See the partitioning sketch after the table.) |
| Hardware Specification | Yes | All experiments are completed with one GeForce RTX 3090 GPU. |
| Software Dependencies | No | The paper mentions 'We use PyTorch to implement all methods' but does not specify the version number of PyTorch or any other software dependencies. |
| Experiment Setup | Yes | We set Office-Caltech10 with a learning rate of 0.001 and batch size of 32, and DomainNet with a learning rate of 0.01 and batch size of 256. The global communication round Tg is set to 50, and the local training epoch Tl is set to 1. The prompt length s is 16. (These values are summarized in the configuration sketch below.) |
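
The client assignment quoted in the Dataset Splits row (one client per Office-Caltech10 domain, five clients per DomainNet domain) can be made concrete with a small sketch. Everything below is an assumption introduced for illustration: the function `partition_into_clients`, the within-domain sharding strategy, and the domain lists (the standard Office-Caltech10 and DomainNet domain names; the paper's exact subset is not stated in this report). Only the domain-to-client counts follow the quoted description.

```python
import random
from typing import Dict, List

# Standard domain names for the two benchmarks (assumed, not taken from this report).
OFFICE_CALTECH10_DOMAINS = ["amazon", "caltech", "dslr", "webcam"]
DOMAINNET_DOMAINS = ["clipart", "infograph", "painting", "quickdraw", "real", "sketch"]


def partition_into_clients(domain_samples: Dict[str, List[int]],
                           clients_per_domain: int,
                           seed: int = 0) -> Dict[str, List[List[int]]]:
    """Split each domain's sample indices into `clients_per_domain` disjoint shards.

    Hypothetical helper: the paper only states the domain-to-client mapping
    (1 client per Office-Caltech10 domain, 5 per DomainNet domain), not how
    samples are sharded within a domain; uniform random sharding is assumed here.
    """
    rng = random.Random(seed)
    partition = {}
    for domain, indices in domain_samples.items():
        shuffled = indices[:]
        rng.shuffle(shuffled)
        # Interleaved slicing yields clients_per_domain disjoint, near-equal shards.
        partition[domain] = [shuffled[i::clients_per_domain] for i in range(clients_per_domain)]
    return partition


if __name__ == "__main__":
    # Toy example: 10 dummy sample indices per DomainNet domain, 5 clients each.
    toy_samples = {d: list(range(10)) for d in DOMAINNET_DOMAINS}
    clients = partition_into_clients(toy_samples, clients_per_domain=5)
    print(len(DOMAINNET_DOMAINS) * 5, "DomainNet clients in total")  # 30 clients
    print(clients["clipart"][0])  # indices assigned to the first 'clipart' client
```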
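
The Experiment Setup row can likewise be condensed into a configuration sketch. The dataclass, its field names, and the `configs` dictionary are assumptions made for illustration; only the numeric values (learning rates, batch sizes, Tg = 50, Tl = 1, s = 16) come from the reported setup.

```python
from dataclasses import dataclass


@dataclass
class FedAPTConfig:
    """Hypothetical container for the training setup reported in the paper."""
    dataset: str
    learning_rate: float
    batch_size: int
    global_rounds: int = 50   # Tg, global communication rounds
    local_epochs: int = 1     # Tl, local training epochs per round
    prompt_length: int = 16   # s, number of prompt tokens


# Reported per-dataset settings (numeric values from the paper; structure assumed).
configs = {
    "Office-Caltech10": FedAPTConfig("Office-Caltech10", learning_rate=1e-3, batch_size=32),
    "DomainNet": FedAPTConfig("DomainNet", learning_rate=1e-2, batch_size=256),
}

if __name__ == "__main__":
    for name, cfg in configs.items():
        print(name, cfg)
```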