Personalized Federated Learning for Cross-City Traffic Prediction
Authors: Yu Zhang, Hua Lu, Ning Liu, Yonghui Xu, Qingzhong Li, Lizhen Cui
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on four real-world traffic datasets demonstrate significant advantages of pFedCTP over representative state-of-the-art methods. |
| Researcher Affiliation | Academia | (1) School of Software, Shandong University (SDU), China; (2) Joint SDU-NTU Centre for Artificial Intelligence Research (C-FAIR), Shandong University, China; (3) Department of People and Technology, Roskilde University, Denmark |
| Pseudocode | Yes | Algorithm 1: The pFedCTP Framework. |
| Open Source Code | Yes | The code is available at https://github.com/ZYuSdu/pFedCTP. |
| Open Datasets | Yes | We evaluate the performance of pFedCTP on four traffic speed datasets: PEMS-BAY, METR-LA [Li et al., 2017b], Didi-Chengdu, and Didi-Shenzhen. PEMS-BAY and METR-LA include traffic information from the San Francisco Bay Area and Los Angeles County in the USA, respectively. Didi-Chengdu and Didi-Shenzhen are provided by the Didi GAIA Initiative [Didi, 2020]. |
| Dataset Splits | No | The paper states, 'To simulate data scarcity in the target city, we only use 3 days of traffic data as training data,' and refers to testing, but it does not explicitly specify a validation dataset split or how validation was performed. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models, processor types, or memory amounts used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers (e.g., Python, PyTorch, or TensorFlow versions) needed to replicate the experiment. |
| Experiment Setup | Yes | Other important hyperparameters are set as follows: the client number C = 4, the batch size = 32, the learning rate = 0.01, the number of GCN layers = 1, and the hidden dimensions = 32 for all methods. |
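For reference, a minimal sketch of how the reported hyperparameters could be gathered into a single configuration, assuming a Python-based setup; the key names are illustrative assumptions and are not taken from the authors' released code.

```python
# Hypothetical configuration reflecting the hyperparameters reported in the paper.
# Key names (e.g., "num_clients", "hidden_dim") are assumed, not from the pFedCTP repo.
config = {
    "num_clients": 4,              # client number C = 4
    "batch_size": 32,
    "learning_rate": 0.01,
    "num_gcn_layers": 1,
    "hidden_dim": 32,
    "target_city_train_days": 3,   # "only use 3 days of traffic data as training data"
}

if __name__ == "__main__":
    # Print the settings so a reproduction run can be checked against the paper.
    for key, value in config.items():
        print(f"{key}: {value}")
```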