Achieving Linear Convergence with Parameter-Free Algorithms in Decentralized Optimization

Authors: Ilya Kuruzov, Gesualdo Scutari, Alexander Gasnikov

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Preliminary numerical experiments support our theoretical findings, demonstrating superior performance in convergence speed and scalability."
Researcher Affiliation | Academia | Ilya Kuruzov (Innopolis University, kuruzov.ia@phystech.edu); Gesualdo Scutari (Purdue University, gscutari@purdue.edu); Alexander Gasnikov (Innopolis University, gasnikov@yandex.ru)
Pseudocode | Yes | Algorithm 1 ("Data: ...") and Algorithm 2 ("Backtracking(...)")
Open Source Code | Yes | Code is provided in the form of an attached archive.
Open Datasets | No | Ridge regression: an instance of (P), with $f_i(x_i) = \|A_i x_i - b_i\|^2 + \sigma \|x_i\|_2^2$, where we set $A_i \in \mathbb{R}^{20 \times 300}$, $b_i \in \mathbb{R}^{20}$, and $\sigma = 0.1$. The elements of $A_i$ and $b_i$ are independently sampled from the standard normal distribution (a generation sketch follows the table).
Dataset Splits | No | The paper uses synthetic data generated by sampling elements from a standard normal distribution but does not specify any training, validation, or test splits.
Hardware Specification | Yes | All experiments are run on an Acer Swift 5 SF514-55TA56B6 with an Intel(R) Core(TM) i5-8250U CPU @ 1.60 GHz (1800 MHz).
Software Dependencies | No | The paper does not list any specific software dependencies with version numbers.
Experiment Setup | Yes | For EXTRA and NIDS we use grid-search tuning, chosen to achieve the best practical performance. Algorithm 1 and Algorithm 3 are simulated under the following choice of line-search parameters satisfying Corollary 4.1: $\gamma_k = (k + 2)/(k + 1)$, $\delta = 1$. For all the algorithms we used the Metropolis-Hastings weight matrix $W$ associated with the communication graph $\mathcal{G}$ [34] (see the sketch below).
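
The ridge-regression instance in the "Open Datasets" row is simple enough to reconstruct. Below is a minimal sketch in Python/NumPy, assuming only what the row states (shapes $A_i \in \mathbb{R}^{20 \times 300}$, $b_i \in \mathbb{R}^{20}$, $\sigma = 0.1$, standard-normal sampling); the agent count, the seed, and all function names are hypothetical, since the table does not state them.

```python
import numpy as np

rng = np.random.default_rng(0)  # seed is an assumption

n_agents = 30   # hypothetical number of agents; not stated in the table
m, d = 20, 300  # A_i is 20 x 300, b_i is a 20-vector (as stated above)
sigma = 0.1

# Entries of A_i and b_i sampled i.i.d. from the standard normal distribution.
A = [rng.standard_normal((m, d)) for _ in range(n_agents)]
b = [rng.standard_normal(m) for _ in range(n_agents)]

def f_i(i, x):
    """Local objective f_i(x_i) = ||A_i x_i - b_i||^2 + sigma * ||x_i||_2^2."""
    r = A[i] @ x - b[i]
    return r @ r + sigma * (x @ x)

def grad_f_i(i, x):
    """Gradient of f_i: 2 A_i^T (A_i x_i - b_i) + 2 sigma x_i."""
    return 2.0 * A[i].T @ (A[i] @ x - b[i]) + 2.0 * sigma * x
```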
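
The Metropolis-Hastings weight matrix referenced in the "Experiment Setup" row has a standard construction: $W_{ij} = 1/(1 + \max(d_i, d_j))$ for each edge $(i, j)$ of the graph, with the diagonal chosen so that rows sum to one. The sketch below implements that rule together with the quoted line-search schedule $\gamma_k = (k+2)/(k+1)$, $\delta = 1$; the ring topology in the usage example is a hypothetical stand-in, since the paper's graph is not given in this row.

```python
import numpy as np

def metropolis_hastings_weights(adj):
    """Metropolis-Hastings rule: W_ij = 1 / (1 + max(deg_i, deg_j)) on edges,
    W_ii = 1 - sum_{j != i} W_ij. The result is symmetric and doubly stochastic."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()  # diagonal fills the remaining row mass
    return W

# Line-search parameters satisfying Corollary 4.1, as quoted above.
gamma = lambda k: (k + 2) / (k + 1)  # gamma_k = (k + 2) / (k + 1)
delta = 1.0

# Usage example on a hypothetical 5-node ring graph.
n = 5
adj = np.zeros((n, n), dtype=int)
for i in range(n):
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1
W = metropolis_hastings_weights(adj)
assert np.allclose(W.sum(axis=0), 1.0) and np.allclose(W.sum(axis=1), 1.0)
```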