Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
The Panaceas for Improving Low-Rank Decomposition in Communication-Efficient Federated Learning
Authors: Shiwei Li, Xiandi Luo, Haozhao Wang, Xing Tang, Shijie Xu, Weihong Luo, Yuhua Li, Xiuqiang He, Ruixuan Li
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental results show that our approach achieves faster convergence and superior accuracy compared to relevant baseline methods. Extensive experiments are conducted on four popular datasets to evaluate the superiority of the proposed method. |
| Researcher Affiliation | Collaboration | 1Huazhong University of Science and Technology, Wuhan, China 2Shenzhen Technology University, Shenzhen, China 3FiT, Tencent, Shenzhen, China. |
| Pseudocode | No | The paper describes methods using mathematical formulations and prose, but does not include any clearly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | Yes | The code is available at https://github.com/Leopold1423/fedmud-icml25. |
| Open Datasets | Yes | FMNIST (Xiao et al., 2017), SVHN (Netzer et al., 2011), CIFAR-10, and CIFAR-100 (Krizhevsky & Hinton, 2009). |
| Dataset Splits | Yes | Based on the data partitioning benchmark of FL (Li et al., 2022), we consider two kinds of non-IID data distribution, termed Non-IID-1 and Non-IID-2. In Non-IID-1, the proportion of the same label across clients follows the Dirichlet distribution (Yurochkin et al., 2019), while in Non-IID-2, each client only contains data of partial labels. For CIFAR-100, we set the Dirichlet parameter to 0.1 in Non-IID-1 and assign 10 random labels to each client in Non-IID-2. For the other datasets, we set the Dirichlet parameter to 0.3 in Non-IID-1 and assign 3 random labels to each client in Non-IID-2. |
| Hardware Specification | No | The paper mentions 'HPC Platform of Huazhong University of Science and Technology' but does not provide specific details on CPU, GPU models, memory, or other hardware components. |
| Software Dependencies | No | The paper mentions optimizers (SGD) and activation functions (ReLU), but does not specify any software libraries (e.g., PyTorch, TensorFlow) with their version numbers, which are necessary for full reproducibility. |
| Experiment Setup | Yes | The number of clients is set to 100, with 10 clients randomly selected to participate in each round of training. The local epoch is set to 3, and the batch size is 64. SGD (Bottou, 2010) is employed as the local optimizer, with the learning rate tuned from the set {1.0, 0.3, 0.1, 0.03, 0.01}. The number of training rounds is set to 100 for FMNIST and SVHN, and 200 for CIFAR-10 and CIFAR-100. [...] the sub-matrices are randomly initialized with values drawn from the uniform distribution U(−a, a), where a is selected from the set {0.01, 0.05, 0.1, 0.5, 1, 5, 10}. In the main experiments, the compression ratio is set to 1/32 for all methods. |
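The Non-IID-1 split quoted in the Dataset Splits row (per-label client proportions drawn from a Dirichlet distribution) can be sketched as follows. This is a generic implementation of Dirichlet label partitioning, not the authors' exact code; the function name and the toy data are illustrative, and the paper's benchmark (Li et al., 2022) may differ in details such as minimum-size guarantees per client.

```python
import numpy as np

def dirichlet_partition(labels, num_clients=100, alpha=0.3, seed=0):
    """Split sample indices across clients so that, for each label,
    the per-client share of that label follows Dirichlet(alpha).
    Smaller alpha -> more skewed (more non-IID) label distributions."""
    rng = np.random.default_rng(seed)
    client_indices = [[] for _ in range(num_clients)]
    for label in np.unique(labels):
        idx = np.flatnonzero(labels == label)
        rng.shuffle(idx)
        # Draw this label's per-client proportions, then cut the index
        # array at the corresponding cumulative positions.
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(part.tolist())
    return client_indices

# Toy run mirroring the Non-IID-1 setting for a 10-class dataset (alpha = 0.3).
labels = np.random.default_rng(1).integers(0, 10, size=1000)
parts = dirichlet_partition(labels, num_clients=100, alpha=0.3)
```

Every sample is assigned to exactly one client, so the partition sizes sum to the dataset size; with alpha = 0.3 most clients end up dominated by a few labels, which is the intended heterogeneity.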
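The Experiment Setup row combines two quantities: sub-matrices initialized from U(−a, a) and a 1/32 compression ratio. A minimal sketch of how a rank could be chosen to hit a target parameter budget for a single dense layer is shown below. This is an assumption about the bookkeeping, not the paper's method: the function `low_rank_init`, the rank formula, and the layer size are all hypothetical, and the paper's decomposition may allocate rank differently across layers.

```python
import numpy as np

def low_rank_init(out_dim, in_dim, compression=1/32, a=0.1, seed=0):
    """Replace a dense out_dim x in_dim weight with factors
    B (out_dim x r) and A (r x in_dim), picking the rank r so that
    the factors hold about `compression` times the dense parameter
    count. Entries are drawn from the uniform distribution U(-a, a)."""
    rng = np.random.default_rng(seed)
    # r * (out_dim + in_dim) ~= compression * out_dim * in_dim
    r = max(1, int(compression * out_dim * in_dim / (out_dim + in_dim)))
    B = rng.uniform(-a, a, size=(out_dim, r))
    A = rng.uniform(-a, a, size=(r, in_dim))
    return B, A

# Hypothetical 512 x 512 layer at the paper's 1/32 compression ratio.
B, A = low_rank_init(512, 512, compression=1/32, a=0.1)
ratio = (B.size + A.size) / (512 * 512)  # communicated vs. dense parameters
```

Only B and A would be communicated each round, which is where the bandwidth saving of low-rank decomposition in federated learning comes from; the tuned scale a controls the initial magnitude of the product BA.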