PerFedRLNAS: One-for-All Personalized Federated Neural Architecture Search

Authors: Dixi Yao, Baochun Li

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In our experiments, we empirically show that our framework achieves much better personalized accuracy and overall time than state-of-the-art methods. We empirically compare PerFedRLNAS with state-of-the-art personalized federated learning methods and previous federated neural architecture search work to see how well our method addresses these heterogeneity problems. Dataset, tasks, and models: we study image classification tasks with CIFAR10 and CIFAR100 (Krizhevsky, Hinton et al. 2009).
Researcher Affiliation | Academia | University of Toronto; dixi.yao@mail.utoronto.ca, bli@ece.toronto.edu
Pseudocode | Yes | Algorithm 1: PerFedRLNAS
Open Source Code | Yes | Our source code is released at https://github.com/TL-System/plato/tree/main/examples/model_search/pfedrlnas.
Open Datasets | Yes | We study image classification tasks with CIFAR10 and CIFAR100 (Krizhevsky, Hinton et al. 2009).
Dataset Splits | No | The paper mentions that 'training samples and test samples are equally partitioned over all the clients' but does not explicitly describe a validation set or specific train/validation/test split percentages. (An illustrative partitioning sketch follows the table.)
Hardware Specification | No | The paper mentions 'physical (GPU) memory' but does not provide specific details on the hardware used for experiments, such as GPU or CPU models.
Software Dependencies | No | All experiments are performed on the federated learning framework Plato (Li et al. 2023a).
Experiment Setup | Yes | In each communication round, each client performs local training for 5 epochs. We set the upload and download data transmit rate to 100 Mbps. Random seeds are fixed during all experiments. (An illustrative configuration sketch follows the table.)
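The dataset-split entry above states only that training and test samples are equally partitioned over all clients. Below is a minimal sketch of such an equal-size partition for CIFAR10; it assumes torchvision is available, uses an IID random split, and picks a hypothetical client count, since the paper's exact sampler, client count, and any validation split are not reported in this record.

```python
# Minimal sketch: equal-size partition of CIFAR10 over federated clients.
# Assumptions: torchvision is installed, the split is IID, and NUM_CLIENTS
# is a hypothetical value; the paper's actual sampler may differ.
import torch
from torch.utils.data import random_split
from torchvision import datasets, transforms

NUM_CLIENTS = 10  # hypothetical client count, not stated in this record

transform = transforms.ToTensor()
train_set = datasets.CIFAR10(root="./data", train=True, download=True, transform=transform)
test_set = datasets.CIFAR10(root="./data", train=False, download=True, transform=transform)

def equal_partition(dataset, num_clients, seed=0):
    """Split a dataset into num_clients shards of (nearly) equal size."""
    shard_size = len(dataset) // num_clients
    sizes = [shard_size] * num_clients
    sizes[-1] += len(dataset) - sum(sizes)  # absorb any remainder in the last shard
    return random_split(dataset, sizes, generator=torch.Generator().manual_seed(seed))

client_train_shards = equal_partition(train_set, NUM_CLIENTS)
client_test_shards = equal_partition(test_set, NUM_CLIENTS)
```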
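The experiment-setup entry reports three concrete knobs: five local epochs per communication round, a 100 Mbps upload/download rate, and fixed random seeds. The sketch below gathers them in one place; the configuration keys, the seed value, and the helper names are illustrative assumptions, not the Plato configuration schema.

```python
# Illustrative sketch of the reported setup knobs; key names and the seed
# value are assumptions made for this example, not taken from the paper or Plato.
import random

import numpy as np
import torch

def fix_seeds(seed: int) -> None:
    """Fix random seeds across common libraries, mirroring 'random seeds are fixed'."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)

def transfer_time_seconds(payload_megabytes: float, rate_mbps: float) -> float:
    """Estimate one-way transfer time for a payload at the stated link rate."""
    return payload_megabytes * 8.0 / rate_mbps

SETUP = {
    "local_epochs_per_round": 5,  # each client trains locally for 5 epochs per round
    "link_rate_mbps": 100,        # upload and download data transmit rate
    "seed": 1,                    # seeds are fixed in the paper, but the value is not reported
}

fix_seeds(SETUP["seed"])
# Example: a hypothetical 50 MB model update over a 100 Mbps link takes ~4 s one way.
print(transfer_time_seconds(50, SETUP["link_rate_mbps"]))
```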