Federated Prompt Learning for Weather Foundation Models on Devices

Authors: Shengchao Chen, Guodong Long, Tao Shen, Jing Jiang, Chengqi Zhang

IJCAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments demonstrate that FedPoD leads the performance among state-of-the-art baselines across various settings on real-world on-device weather forecasting datasets.
Researcher Affiliation | Academia | Shengchao Chen, Guodong Long, Tao Shen, Jing Jiang and Chengqi Zhang, Australian Artificial Intelligence Institute, FEIT, University of Technology Sydney; shengchao.chen.uts@gmail.com, {guodong.long, tao.shen, jing.jiang, chengqi.zhang}@uts.edu.au
Pseudocode | Yes | Algorithm 1: Implementation of PT and PV Updating; Algorithm 2: Implementation of FedPoD
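The paper provides pseudocode for these two routines but, as noted below, no released code. For orientation only, here is a minimal FedAvg-style sketch of the general pattern "clients fine-tune prompts around a frozen backbone, the server averages them." The prompt-dictionary layout, the `backbone(x, prompts)` call signature, and unweighted-mean aggregation are illustrative assumptions, not FedPoD's actual Algorithms 1 and 2.

```python
from typing import Dict, List

import torch


def local_prompt_update(prompts: Dict[str, torch.Tensor],
                        backbone: torch.nn.Module,
                        loader,
                        epochs: int = 25,
                        lr: float = 1e-3) -> Dict[str, torch.Tensor]:
    """One client's round: train only the prompt tensors while the
    weather foundation-model backbone stays frozen."""
    prompts = {k: v.detach().clone().requires_grad_(True)
               for k, v in prompts.items()}
    optimizer = torch.optim.Adam(prompts.values(), lr=lr)
    loss_fn = torch.nn.L1Loss()  # MAE, matching the paper's evaluation metric
    for _ in range(epochs):
        for x, y in loader:
            # Assumed signature: backbone conditioned on the prompt tensors.
            pred = backbone(x, prompts)
            loss = loss_fn(pred, y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return {k: v.detach() for k, v in prompts.items()}


def aggregate(client_prompts: List[Dict[str, torch.Tensor]]) -> Dict[str, torch.Tensor]:
    """Server step: unweighted FedAvg-style mean of each prompt tensor."""
    return {k: torch.stack([c[k] for c in client_prompts]).mean(dim=0)
            for k in client_prompts[0]}


# One communication round (sketch):
# global_prompts = aggregate(
#     [local_prompt_update(global_prompts, backbone, dl) for dl in client_loaders])
```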
Open Source Code | No | The paper does not provide a direct link to open-source code for the FedPoD methodology or explicitly state that the code has been released.
Open Datasets | Yes | Three weather multivariate time-series datasets from [Chen et al., 2023b]: AvePRE, SurTEMP, and SurUPS, collected by 88, 525, and 238 devices, respectively. Detailed information can be found in Appendix A.
Dataset Splits | No | The paper does not explicitly state train/validation/test splits as percentages or sample counts. It mentions training over local epochs and communication rounds, but not the data partitioning strategy used for validation.
Hardware Specification | No | The paper does not specify any hardware details (e.g., GPU models, CPU types, memory) used for running the experiments.
Software Dependencies | No | The paper does not provide specific version numbers for any software components, libraries, or programming languages used in the experiments.
Experiment Setup | Yes | Main experiments are conducted with 25 local epochs within 50 communication rounds. Following [Chen et al., 2022], Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) are used as evaluation metrics. All results are reported at 100× the original value for a clearer comparison. Detailed information about the implementation, the local updating process, and the aggregation can be found in Appendix B. Our configuration is as follows: 5 local epochs and 10 communication rounds, while other settings follow the main experiments.
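MAE and RMSE are standard metrics with well-known definitions; the sketch below shows how numbers at the paper's reported 100× scale could be computed. The example arrays are illustrative only, not values from the paper.

```python
import numpy as np


def mae(pred: np.ndarray, target: np.ndarray) -> float:
    """Mean Absolute Error: mean(|pred - target|)."""
    return float(np.mean(np.abs(pred - target)))


def rmse(pred: np.ndarray, target: np.ndarray) -> float:
    """Root Mean Squared Error: sqrt(mean((pred - target)^2))."""
    return float(np.sqrt(np.mean((pred - target) ** 2)))


# Illustrative inputs only; the paper scales all reported results by 100x.
pred = np.array([0.121, 0.118, 0.130])
target = np.array([0.120, 0.121, 0.127])
print(f"MAE  (x100): {100 * mae(pred, target):.3f}")
print(f"RMSE (x100): {100 * rmse(pred, target):.3f}")
```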