On Sample Optimality in Personalized Collaborative and Federated Learning

Authors: Mathieu Even, Laurent Massoulié, Kevin Scaman

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We numerically illustrate our theory in Appendix A on synthetic datasets, with clustered agents (as in this section), as well as in a setting where agents are distributed according to a more general distribution of agents."
Researcher Affiliation | Collaboration | Inria Paris, Département d'informatique de l'ENS, PSL Research University; Microsoft–Inria Joint Center
Pseudocode | Yes | Algorithm 1: All-for-all algorithm
Open Source Code | No | No explicit statement about providing open-source code for the methodology described in this paper, or a direct link to a code repository, was found.
Open Datasets | Yes | "for the MNIST dataset, d_eff is less than 3, while the ambient dimension is 784 [22]"
Dataset Splits | No | No specific dataset split information (exact percentages, sample counts, citations to predefined splits, or detailed splitting methodology) needed to reproduce the data partitioning was found.
Hardware Specification | No | No specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running experiments were provided.
Software Dependencies | No | No specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiment were provided.
Experiment Setup | No | The paper does not provide specific experimental setup details such as concrete hyperparameter values, optimizer settings, or detailed training configurations for the algorithms.
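For readers attempting a reimplementation despite the missing setup details, the pseudocode row points to the paper's Algorithm 1, an "all-for-all" scheme in which every agent's personalized model is updated using information from all agents. The sketch below is NOT the paper's algorithm; it is a minimal illustration of the general all-for-all pattern under assumed notation: hypothetical collaboration weights `W`, a fixed step size `lr`, and toy quadratic losses.

```python
import numpy as np

def all_for_all_step(thetas, grads, W, lr):
    """One hypothetical 'all-for-all' update: each agent i moves its
    personalized model along a weighted average of ALL agents' gradients.

    thetas: (n, d) array of personalized models, one row per agent.
    grads:  (n, d) array of local stochastic gradients.
    W:      (n, n) row-stochastic collaboration-weight matrix (assumed).
    lr:     step size (assumed fixed; the paper's schedule is not given).
    """
    return thetas - lr * (W @ grads)

# Toy usage: 3 agents with 2-dimensional models and quadratic losses
# 0.5 * ||theta_i - target_i||^2, collaborating with uniform weights.
rng = np.random.default_rng(0)
thetas = rng.normal(size=(3, 2))
targets = rng.normal(size=(3, 2))
W = np.full((3, 3), 1.0 / 3.0)  # uniform collaboration (hypothetical)
for _ in range(100):
    grads = thetas - targets  # gradients of the quadratic losses
    thetas = all_for_all_step(thetas, grads, W, lr=0.5)
```

With uniform weights every agent follows the average gradient, so the mean model converges to the mean target while inter-agent offsets are preserved; in the paper's setting the weights would instead reflect similarity between agents' data distributions.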