Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Learning to Prompt Your Domain for Federated Vision-Language Models

Authors: Guoyizhe Wei, Feng Wang, Anshul Shah, Rama Chellappa

TMLR 2025 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments demonstrate ADAPT's significant efficiency and effectiveness in federated learning. For example, by learning and sharing only 0.35M parameters, ADAPT attains a 69.8% average accuracy over the six domains of DomainNet, which improves the original CLIP accuracy by 16.2%.
Researcher Affiliation | Academia | Guoyizhe Wei (Johns Hopkins University); Feng Wang (Johns Hopkins University); Anshul Shah (Johns Hopkins University); Rama Chellappa (Johns Hopkins University)
Pseudocode | No | The paper includes 'Figure 1: Local training framework', which illustrates the process, but it is a diagram with descriptive text rather than a formal pseudocode block or algorithm.
Open Source Code | No | The paper does not contain an unambiguous sentence where the authors state they are releasing the code for the work described, nor does it provide a direct link to a source-code repository.
Open Datasets | Yes | We evaluate the proposed ADAPT and baseline methods on three domain adaptation image classification benchmarks: the DomainNet (Peng et al., 2019), Office-Home (Venkateswara et al., 2017), and PACS (Li et al., 2017) datasets, presented in the Appendix. [...] both pretrained on ImageNet-1k (Deng et al., 2009).
Dataset Splits | Yes | By default, the number of clients is determined by the number of domains for each dataset, i.e., n = 6 for DomainNet and n = 4 for Office-Home and PACS. [...] In our ablation study, we also further divide each domain into five splits with non-i.i.d. categories.
Hardware Specification | No | The paper does not mention specific hardware details such as GPU models (e.g., NVIDIA A100), CPU models (e.g., Intel Xeon), or other processor types used to run the experiments.
Software Dependencies | No | The paper mentions optimizers such as SGD and AdamW but does not specify software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x, CUDA x.x).
Experiment Setup | Yes | We train both our model and the baseline models for 200 epochs and execute the aggregation or broadcast process after every epoch. We train the ResNet-based models and prompt tokens with an SGD optimizer with 0.01 learning rate, 0.9 momentum, and 0.005 weight decay. ADAPT instead uses the AdamW (Loshchilov & Hutter, 2019) optimizer with β1 = 0.9, β2 = 0.999, 5e-4 learning rate, and 0.01 weight decay for transformer-based models. We set the temperature coefficient τd = 0.1 in Equation 4 and the momentum update ratio α = 0.99 in Equation 6. If not specified, all reported results are averages over three trials.
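The reported training configuration can be collected into a minimal sketch. The dictionary names and the `ema_update` helper below are illustrative, not taken from the paper, and the exponential-moving-average form is only one plausible reading of the momentum update in Equation 6:

```python
# Hyperparameters quoted in the Experiment Setup row (grouping is illustrative).
SGD_CFG = dict(lr=0.01, momentum=0.9, weight_decay=0.005)          # ResNet models & prompt tokens
ADAMW_CFG = dict(lr=5e-4, betas=(0.9, 0.999), weight_decay=0.01)   # transformer-based models (ADAPT)
TAU_D = 0.1    # temperature coefficient τd in Equation 4
ALPHA = 0.99   # momentum update ratio α in Equation 6
EPOCHS = 200   # aggregation/broadcast executed after every epoch

def ema_update(old: float, new: float, alpha: float = ALPHA) -> float:
    """Momentum-style update: retain a fraction alpha of the old value.

    Hypothetical helper; the paper's Equation 6 is not reproduced here,
    so this shows the standard EMA form with the reported alpha = 0.99.
    """
    return alpha * old + (1.0 - alpha) * new
```

With α = 0.99, each update moves the tracked quantity only 1% toward its new value, e.g. `ema_update(1.0, 0.0)` returns 0.99.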