First-Order Algorithms Converge Faster than $O(1/k)$ on Convex Problems
Authors: Ching-pei Lee, Stephen J. Wright
ICML 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | In this work, we improve this rate to $o(1/k)$. We extend the result to proximal gradient and proximal coordinate descent on regularized problems to show similar $o(1/k)$ convergence rates. The result is tight in the sense that a rate of $O(1/k^{1+\epsilon})$ is not generally attainable for any $\epsilon > 0$, for any of these methods. (A numerical sketch of this rate appears after the table.) |
| Researcher Affiliation | Academia | 1Department of Computer Sciences and Wisconsin Institute for Discovery, University of Wisconsin-Madison, Madison, Wisconsin, USA. Correspondence to: Ching-pei Lee <chingpei@cs.wisc.edu>, Stephen J. Wright <swright@cs.wisc.edu>. |
| Pseudocode | No | The paper describes algorithms mathematically and in prose, but does not contain structured pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement or link regarding the public availability of its source code for the described methodology. |
| Open Datasets | No | This is a theoretical paper that does not use or reference any datasets for training. |
| Dataset Splits | No | This is a theoretical paper that does not use or reference any datasets, thus no dataset split information is provided. |
| Hardware Specification | No | As this is a theoretical paper, no specific hardware used for running experiments is mentioned. |
| Software Dependencies | No | As this is a theoretical paper, no specific software dependencies with version numbers are mentioned. |
| Experiment Setup | No | As this is a theoretical paper, no experimental setup details such as hyperparameters or training settings are provided. |
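
The paper's central claim, quoted in the Research Type row above, is that gradient descent on a smooth convex problem attains $f(x_k) - f^* = o(1/k)$ rather than merely $O(1/k)$, i.e. $k \cdot (f(x_k) - f^*) \to 0$. Below is a minimal numerical sketch of that distinction, not taken from the paper: gradient descent with the standard $1/L$ step size on the smooth, convex, but not strongly convex one-dimensional function $f(x) = x^4$. The test function, starting point, and iteration counts are illustrative assumptions.

```python
# Minimal sketch (illustrative, not from the paper): gradient descent on
# f(x) = x^4, a smooth convex function with minimum value f* = 0.
# The classical guarantee is f(x_k) - f* = O(1/k); the paper sharpens
# this to o(1/k), i.e. k * (f(x_k) - f*) -> 0 as k grows.

x0 = 1.0
L = 12.0 * x0 ** 2   # f'(x) = 4x^3 is L-Lipschitz on [0, x0], where the
                     # iterates stay (they decrease monotonically to 0)
step = 1.0 / L       # standard 1/L step size

x = x0
for k in range(1, 1_000_001):
    x -= step * 4.0 * x ** 3                 # gradient step
    if k in (10, 100, 1_000, 10_000, 100_000, 1_000_000):
        gap = x ** 4                         # f(x_k) - f*
        print(f"k = {k:>9,}   k * (f(x_k) - f*) = {k * gap:.3e}")
```

On this instance the gap in fact decays on the order of $1/k^2$, so the printed products shrink roughly like $1/k$, consistent with the $o(1/k)$ guarantee. The tightness part of the quoted result says something different: no single rate $O(1/k^{1+\epsilon})$ holds uniformly over all smooth convex problems, even though individual instances like this one can be faster.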