Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Modular Federated Contrastive Learning with Twin Normalization for Resource-limited Clients
Authors: Azadeh Motamedi, Il-Min Kim
TMLR 2024 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Results show that ResNet-18 trained with MFCL(TN) on CIFAR-10 achieves 84.1% accuracy when data is severely heterogeneous while reducing the communication burden and memory footprint compared to end-to-end training. Through experiments, we demonstrate the effectiveness of the proposed MFCL, especially with TN, which achieves robust, stable, and state-of-the-art performance on severely heterogeneous and CIB data while only a small-size client module is trained federally across clients. |
| Researcher Affiliation | Academia | Azadeh Motamedi EMAIL Department of Electrical and Computer Engineering, Queen's University. Il-Min Kim EMAIL Department of Electrical and Computer Engineering, Queen's University. |
| Pseudocode | Yes | Algorithm 1 Modular Federated Contrastive Learning (MFCL) |
| Open Source Code | No | The code will be released upon paper acceptance. |
| Open Datasets | Yes | We perform our experiments on CIFAR-10, CIFAR-100 Krizhevsky (2009), and Tiny-ImageNet Le & Yang (2015). |
| Dataset Splits | No | The paper uses the standard CIFAR-10, CIFAR-100, and Tiny-ImageNet benchmarks and reports accuracy on a "uniform test set". It describes how data is distributed across clients using a Dirichlet distribution, but does not explicitly state train/validation/test split percentages or sample counts in the main text, relying instead on the standard splits of these benchmarks. |
| Hardware Specification | Yes | We used a single NVIDIA GeForce RTX 3090 GPU to simulate the clients and rServer modules. |
| Software Dependencies | Yes | We implemented MFCL with TensorFlow 2.14 following the standard structure of FL McMahan et al. (2017) and the contrastive learning Chen et al. (2020). |
| Experiment Setup | Yes | Table 12: List of hyperparameters. CIFAR-10/100: ResNet-18, client module (CM) = first 2 layers, CM batch size 64, CM optimizer Adam, CM learning rate 0.05, 15 FL rounds, rServer module (rSM) epochs 150, rSM optimizer LARS, rSM learning rate cosine with lr0 = 1.0. Tiny-ImageNet: ResNet-50, CM = first 4 layers, CM batch size 64, CM optimizer Adam, CM learning rate 0.001, 15 FL rounds, rSM epochs 200, rSM optimizer LARS, rSM learning rate cosine with lr0 = 1.0. Table 13: List of augmentation techniques. |
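The Dataset Splits row notes that client data is distributed with a Dirichlet distribution, a common way to simulate heterogeneous (non-IID) federated clients: smaller concentration parameters give each client a more skewed class mix. A minimal sketch of such a partition is below; the `dirichlet_partition` helper and the toy `labels` array are illustrative assumptions, not code from the paper.

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha, seed=0):
    """Partition sample indices across clients with a Dirichlet prior.

    Smaller alpha -> more heterogeneous clients (each dominated by few classes).
    """
    rng = np.random.default_rng(seed)
    num_classes = int(labels.max()) + 1
    client_indices = [[] for _ in range(num_clients)]
    for c in range(num_classes):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        # Draw per-client proportions for this class from Dirichlet(alpha).
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        # Turn cumulative proportions into split points over this class's samples.
        cuts = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(part.tolist())
    return [np.array(ci) for ci in client_indices]

# Toy example: 10 classes, 100 samples each, split across 5 clients.
labels = np.repeat(np.arange(10), 100)
parts = dirichlet_partition(labels, num_clients=5, alpha=0.1)
```

With alpha = 0.1 most clients end up holding samples from only a handful of classes, which is the "severely heterogeneous" regime the paper evaluates; larger alpha values approach a uniform split.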
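The Experiment Setup row lists a cosine learning-rate schedule with lr0 = 1.0 for the rServer module (a large base rate typical for the LARS optimizer). A standard cosine decay can be sketched as follows; the `cosine_lr` helper name and the `lr_min` floor are assumptions, not details from the paper.

```python
import math

def cosine_lr(step, total_steps, lr0=1.0, lr_min=0.0):
    """Cosine decay from lr0 at step 0 down to lr_min at total_steps."""
    progress = step / total_steps
    return lr_min + 0.5 * (lr0 - lr_min) * (1.0 + math.cos(math.pi * progress))

# The rate starts at lr0, reaches lr0/2 at the midpoint, and ends near lr_min.
schedule = [cosine_lr(s, total_steps=150, lr0=1.0) for s in range(151)]
```

Here `total_steps=150` mirrors the 150 rSM epochs reported for CIFAR-10/100; whether the paper decays per epoch or per batch is not stated in the extract.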