Accelerated consensus via Min-Sum Splitting
Authors: Patrick Rebeschini, Sekhar C. Tatikonda
NeurIPS 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | In this paper we show that the Min-Sum scheme based on splitting yields convergence to the consensus solution, and we analytically establish rates of convergence for any graph topology. First, we show that a certain parametrization of the Min-Sum protocol for consensus yields a linear message-passing update for any graph and for any choice of initial conditions. Second, we show that the introduction of the splitting parameters is not only fundamental to guarantee the convergence and correctness of the Min-Sum scheme in the consensus problem, but that proper tuning of these parameters yields accelerated (i.e., subdiffusive) asymptotic rates of convergence. We establish a square-root improvement for the asymptotic convergence time over diffusive methods, which allows Min-Sum Splitting to scale like O(D log(D/ε)) for cycles and tori. Our results show that Min-Sum schemes are competitive and get close to the optimal rate O(D log(1/ε)) recently established for some algorithms based on Nesterov's acceleration [30, 36]. The main tool used for the analysis involves the construction of an auxiliary linear process supported on the nodes of the original graph to track the evolution of the Min-Sum Splitting algorithm, which is instead supported on the directed edges. |
| Researcher Affiliation | Academia | Patrick Rebeschini Department of Statistics University of Oxford patrick.rebeschini@stats.ox.ac.uk Sekhar Tatikonda Department of Electrical Engineering Yale University sekhar.tatikonda@yale.edu |
| Pseudocode | Yes | Algorithm 1: Min-Sum Splitting; Algorithm 2: Min-Sum Splitting, consensus problem, quadratic case; Algorithm 3: Auxiliary message-passing |
| Open Source Code | No | The paper does not provide any explicit statements about releasing source code or links to a code repository for the methodology described. |
| Open Datasets | No | The paper is theoretical and does not involve empirical training on a dataset. It discusses the problem formulation with `φ_v(z) := (1/2)z² − b_v·z` but does not refer to a publicly available dataset for empirical evaluation. |
| Dataset Splits | No | The paper is theoretical and does not involve empirical dataset splits for validation. Therefore, no details on training/validation/test splits are provided. |
| Hardware Specification | No | The paper is theoretical and does not describe any experiments that would require specific hardware. Therefore, no hardware specifications are mentioned. |
| Software Dependencies | No | The paper is theoretical and does not describe any empirical implementations that would require specific software dependencies with version numbers. |
| Experiment Setup | No | The paper is theoretical and does not describe an empirical experimental setup with hyperparameters or system-level training settings. It defines algorithmic parameters, but not an experimental configuration. |
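For context on the problem the paper analyzes: with `φ_v(z) := (1/2)z² − b_v·z`, the objective `min_z Σ_v φ_v(z)` is minimized at the average of the `b_v`, which is the consensus value every scheme discussed above targets. The sketch below is a minimal illustration of that fact using a plain diffusive update on a cycle graph; it is a baseline of the kind the paper's Min-Sum Splitting scheme accelerates, not the paper's algorithm itself (the graph, matrix `W`, and iteration count are illustrative assumptions).

```python
import numpy as np

# Each node v holds phi_v(z) = z**2 / 2 - b[v] * z.
# The sum over v is minimized at z* = mean(b): the consensus value.
rng = np.random.default_rng(0)
n = 20
b = rng.normal(size=n)
z_star = b.mean()

# Plain diffusive consensus on a cycle: x <- W @ x with W doubly
# stochastic (lazy nearest-neighbor averaging). This is a standard
# baseline, NOT the Min-Sum Splitting update from the paper.
W = np.zeros((n, n))
for v in range(n):
    W[v, v] = 0.5
    W[v, (v - 1) % n] = 0.25
    W[v, (v + 1) % n] = 0.25

x = b.copy()
for _ in range(5000):
    x = W @ x

# Every coordinate of x approaches mean(b); on a cycle this takes
# O(n^2) iterations, the diffusive scaling the paper improves upon.
```

The slow O(n²) mixing of this baseline on cycles is exactly the diffusive behavior that the paper's splitting parameters improve to the subdiffusive O(D log(D/ε)) rate quoted in the table.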