Formal Logic Enabled Personalized Federated Learning through Property Inference
Authors: Ziyan An, Taylor T. Johnson, Meiyi Ma
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate the proposed method on two tasks: a real-world traffic volume prediction task consisting of sensory data from fifteen states, and a smart city multi-task prediction task using synthetic data. The evaluation results exhibit clear improvements, with accuracy improved by up to 54% across all sequential prediction models. |
| Researcher Affiliation | Academia | Ziyan An, Taylor T. Johnson, Meiyi Ma Department of Computer Science, Vanderbilt University, Nashville, TN, USA {ziyan.an, taylor.johnson, meiyi.ma}@vanderbilt.edu |
| Pseudocode | Yes | Algorithm 1: CLUSTERID: Cluster identity mapping; Algorithm 2: FedSTL: Client federation and update |
| Open Source Code | Yes | Code implementation is available at https://github.com/AICPSLab/FedSTL.git. |
| Open Datasets | Yes | We obtain a publicly available dataset from the Federal Highway Administration (FHWA 2016) and preprocess hourly traffic volume from 15 states. ... we create a simulated dataset using SUMO (Simulation of Urban MObility) (Krajzewicz et al. 2002), a large-scale open-source road traffic simulator. |
| Dataset Splits | No | The paper does not provide specific details on training, validation, or test dataset splits (e.g., percentages, sample counts, or k-fold cross-validation setup). |
| Hardware Specification | Yes | The experiments were conducted on a machine equipped with an Intel Core i9-10850K CPU and an NVIDIA GeForce RTX 3070 GPU. |
| Software Dependencies | No | The paper mentions 'The operating system used was Ubuntu 18.04.' but does not list specific versions for other key software components like machine learning frameworks (e.g., PyTorch, TensorFlow) or libraries. |
| Experiment Setup | Yes | During each round of FL communication, we randomly select 10% of the client devices to participate. For all the conducted experiments and algorithms, we use SGD with consistent learning rates and a batch size of 64. ... We set the number of local epochs to 10 for FedAvg, FedProx, FedRep (with 8 head epochs), Ditto, and IFCA. Additionally, for FedSTL, we employ 6 local epochs and 4 cluster training epochs. (A configuration sketch follows the table.) |
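
The Experiment Setup row describes a standard federated-learning communication round: sample 10% of the clients, run local SGD, then aggregate. Below is a minimal FedAvg-style sketch of that round, assuming PyTorch; the total client count, learning rate, loss function, and all model/data objects are illustrative placeholders, not the authors' implementation.

```python
import random
import torch

# Hedged sketch of the communication round quoted above (10% client
# sampling, SGD, batch size 64, 10 local epochs for FedAvg).
# NUM_CLIENTS and LEARNING_RATE are assumptions; the paper excerpt
# does not quote them. FedSTL additionally splits training into 6 local
# epochs and 4 cluster epochs, which this sketch does not model.

NUM_CLIENTS = 100
SAMPLE_FRACTION = 0.10   # "randomly select 10% of the client devices"
LOCAL_EPOCHS = 10        # quoted local-epoch count for FedAvg
LEARNING_RATE = 0.01     # assumption: a "consistent learning rate"

def local_update(global_state, loader, model):
    """One client's local training pass, starting from the global weights."""
    model.load_state_dict(global_state)
    opt = torch.optim.SGD(model.parameters(), lr=LEARNING_RATE)
    loss_fn = torch.nn.MSELoss()  # assumption: regression loss for traffic volume
    for _ in range(LOCAL_EPOCHS):
        for x, y in loader:  # loader is assumed to yield batches of size 64
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict()

def communication_round(global_state, client_loaders, model):
    """Sample 10% of clients, train locally, and average the weights."""
    k = max(1, int(SAMPLE_FRACTION * NUM_CLIENTS))
    sampled = random.sample(range(NUM_CLIENTS), k)
    states = [local_update(global_state, client_loaders[c], model) for c in sampled]
    # Unweighted FedAvg aggregation; weighting by client data size is a
    # common variant that the excerpt does not specify.
    return {key: torch.stack([s[key].float() for s in states]).mean(dim=0)
            for key in global_state}
```

Under this reading, the quoted batch size of 64 belongs to each client's data loader, and "consistent learning rates" maps to a single rate shared across all baseline algorithms; neither detail is stated more precisely in the excerpt.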