DePRL: Achieving Linear Convergence Speedup in Personalized Decentralized Learning with Shared Representations

Authors: Guojun Xiong, Gang Yan, Shiqiang Wang, Jian Li

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 'Experimental results support our theoretical findings showing the superiority of our method in data heterogeneous environments.' and 'Evaluation. To examine the performance of DePRL and verify our theoretical results, we conduct experiments on different datasets with representative DNN models and compare with a set of baselines.'
Researcher Affiliation | Collaboration | Stony Brook University, Binghamton University, IBM T. J. Watson Research Center
Pseudocode | Yes | Algorithm 1: DePRL
Open Source Code | No | The paper states 'We implement all algorithms in PyTorch (Paszke et al. 2017) on Python 3 with three NVIDIA RTX A6000 GPUs.' but does not provide any explicit statement about making its own source code available or a link to a repository.
Open Datasets | Yes | We use (i) three image classification datasets: CIFAR-100, CIFAR-10 (Krizhevsky, Hinton et al. 2009) and Fashion-MNIST (Xiao, Rasul, and Vollgraf 2017); and (ii) a human activity recognition dataset: HARBox (Ouyang et al. 2021).
Dataset Splits | No | The paper describes how data is partitioned among workers to simulate non-IID scenarios ('we simulate a heterogeneous partition into N workers by sampling p_i ~ Dir_N(π)') and mentions that each worker has a 'local dataset D_i', but it does not provide specific percentages or counts for training, validation, or test splits of these local datasets. It refers to 'local test accuracy' without defining the split.
Hardware Specification | Yes | We implement all algorithms in PyTorch (Paszke et al. 2017) on Python 3 with three NVIDIA RTX A6000 GPUs.
Software Dependencies | No | The paper states 'We implement all algorithms in PyTorch (Paszke et al. 2017) on Python 3' but does not provide specific version numbers for PyTorch or the minor version of Python 3.
Experiment Setup | Yes | 'The total worker number is 128, and the epoch number for local head update is 2.' and 'when training all considered models using different datasets with α = 0.3'.
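
To make the data partitioning quoted in the Dataset Splits and Experiment Setup rows concrete, below is a minimal sketch of Dirichlet-based non-IID partitioning under the stated setup (128 workers, α = 0.3). This is an illustrative reconstruction, not the paper's code (which is not released); the function name partition_dirichlet, the random seed, and the synthetic labels are assumptions for the example.

```python
# Sketch of Dirichlet non-IID partitioning, following the quoted excerpt
# "we simulate a heterogeneous partition into N workers by sampling p_i ~ Dir_N(π)".
# All names here are illustrative; they are not from the paper's implementation.
import numpy as np

def partition_dirichlet(labels, num_workers=128, alpha=0.3, seed=0):
    """Assign sample indices to workers with per-class Dirichlet proportions."""
    rng = np.random.default_rng(seed)
    num_classes = int(labels.max()) + 1
    worker_indices = [[] for _ in range(num_workers)]
    for c in range(num_classes):
        class_idx = np.flatnonzero(labels == c)
        rng.shuffle(class_idx)
        # Fraction of class c given to each worker; smaller alpha -> more skew.
        proportions = rng.dirichlet(alpha * np.ones(num_workers))
        split_points = (np.cumsum(proportions)[:-1] * len(class_idx)).astype(int)
        for worker_id, shard in enumerate(np.split(class_idx, split_points)):
            worker_indices[worker_id].extend(shard.tolist())
    return worker_indices

# Example: 50,000 synthetic CIFAR-style labels over 10 classes, 128 workers, alpha = 0.3.
labels = np.random.default_rng(1).integers(0, 10, size=50_000)
shards = partition_dirichlet(labels, num_workers=128, alpha=0.3)
print(sum(len(s) for s in shards))  # 50000: every sample assigned exactly once
```

With α = 0.3 each worker's local label distribution is heavily skewed toward a few classes, which is the data-heterogeneous regime the paper evaluates; how each local shard is further divided into train and test portions is the detail the report flags as unspecified.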