Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Beyond spectral gap: the role of the topology in decentralized learning
Authors: Thijs Vogels, Hadrien Hendrikx, Martin Jaggi
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We quantify how the graph topology influences convergence in a quadratic toy problem and provide theoretical results for general smooth and (strongly) convex objectives. Our theory matches empirical observations in deep learning, and accurately describes the relative merits of different graph topologies. Code: github.com/epfml/topology-in-decentralized-learning |
| Researcher Affiliation | Academia | Thijs Vogels EMAIL and Hadrien Hendrikx EMAIL |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | Code: github.com/epfml/topology-in-decentralized-learning |
| Open Datasets | Yes | We experiment with a variety of 32-worker topologies on Cifar-10 [9] with a VGG-11 model [19]. |
| Dataset Splits | No | The paper mentions 'train and test loss' but does not specify explicit training, validation, or test dataset splits (e.g., percentages or sample counts). The ethics checklist also states 'N/A' for specifying all training details including data splits. |
| Hardware Specification | No | The ethics review checklist states: 'Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [No]' |
| Software Dependencies | No | The paper does not provide specific version numbers for software dependencies (e.g., programming languages, libraries, or frameworks). |
| Experiment Setup | Yes | We experiment with a variety of 32-worker topologies on Cifar-10 [9] with a VGG-11 model [19]. We focus on the initial phase of training, 25k steps in our case... In Appendix F.1, we replicate the same experiments in a different setting. There, we use larger graphs (of 64 workers), a different model and data set (an MLP on Fashion MNIST [24]), and no momentum or weight decay. |