Lyapunov Functions for First-Order Methods: Tight Automated Convergence Guarantees
Authors: Adrien Taylor, Bryan Van Scoy, Laurent Lessard
ICML 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our main results are then presented in Section 4, which also features numerical examples and comparisons to other approaches. The performance estimation toolbox PESTO (Taylor et al., 2017a) can be used to perform numerical validations. Numerical results are provided in Figure 4. |
| Researcher Affiliation | Academia | 1INRIA, Département d'informatique de l'ENS, École normale supérieure, CNRS, PSL Research University, Paris, France 2Wisconsin Institute for Discovery, University of Wisconsin–Madison, Madison, Wisconsin, USA 3Department of Electrical and Computer Engineering, University of Wisconsin–Madison, Madison, Wisconsin, USA. Correspondence to: Adrien Taylor <adrien.taylor@inria.fr>, Bryan Van Scoy <vanscoy@wisc.edu>, Laurent Lessard <laurent.lessard@wisc.edu>. |
| Pseudocode | No | The paper does not contain pseudocode or a clearly labeled algorithm block. |
| Open Source Code | Yes | The code used to implement (ρ-SDP) and generate the figures in this paper is available at https://github.com/QCGroup/quad-lyap-first-order. |
| Open Datasets | No | The paper performs theoretical analysis and numerical comparisons of optimization algorithms. It does not mention the use of any specific public or open datasets for training or evaluation, nor does it provide any access information for such datasets. |
| Dataset Splits | No | The paper does not mention training/test/validation dataset splits, as it focuses on theoretical analysis and numerical comparisons of optimization algorithms rather than empirical evaluation on datasets. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU, GPU models, memory) used for running experiments. It mentions using "the performance estimation toolbox PESTO" for numerical validations, but not the hardware specifics. |
| Software Dependencies | No | The paper mentions "the performance estimation toolbox PESTO (Taylor et al., 2017a)" but does not provide specific version numbers for any software or libraries. |
| Experiment Setup | Yes | Many first-order optimization methods are of the form (M), including: the Gradient Method (GM), the Heavy Ball Method (HBM; Polyak, 1964), the Fast Gradient Method for smooth strongly convex minimization (FGM; Nesterov, 2004), the Triple Momentum Method (TMM; Van Scoy et al., 2018), and the Robust Momentum Method (Cyrus et al., 2018). Each of these methods can be parametrized as y_k = x_k + γ(x_k − x_{k−1}) (14a), x_{k+1} = x_k + β(x_k − x_{k−1}) − α∇f(y_k) (14b) for k ≥ 0, where x_{−1}, x_0 ∈ ℝ^d are the initial conditions. With κ = L/µ, the parameters (α, β, γ) for each method are: GM: (1/L, 0, 0); HBM: ((2/(√L+√µ))², ((√κ−1)/(√κ+1))², 0); FGM: (1/L, (√κ−1)/(√κ+1), (√κ−1)/(√κ+1)); TMM: ((1+ρ)/L, ρ²/(2−ρ), ρ²/((1+ρ)(2−ρ))) with ρ = 1 − 1/√κ. |
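The parametrized update (14) quoted above is straightforward to run. Below is a minimal pure-Python sketch, assuming a separable quadratic objective with curvatures µ = 1 and L = 100 (so κ = 100) and minimizer at the origin; the helper name `run_method`, the test function, and the iteration count are illustrative choices, not from the paper.

```python
import math

def run_method(alpha, beta, gamma, grad, x0, iters=300):
    """Iterate update (14):
        y_k     = x_k + gamma * (x_k - x_{k-1})
        x_{k+1} = x_k + beta  * (x_k - x_{k-1}) - alpha * grad(y_k)
    starting from x_{-1} = x_0, and return the final iterate."""
    x_prev, x = list(x0), list(x0)
    for _ in range(iters):
        y = [xi + gamma * (xi - xpi) for xi, xpi in zip(x, x_prev)]
        g = grad(y)
        x_next = [xi + beta * (xi - xpi) - alpha * gi
                  for xi, xpi, gi in zip(x, x_prev, g)]
        x_prev, x = x, x_next
    return x

# Illustrative objective: separable quadratic f(x) = (mu/2) x_1^2 + (L/2) x_2^2,
# whose gradient is coordinate-wise scaling; minimizer is the origin.
mu, L = 1.0, 100.0
kappa = L / mu
grad = lambda y: [mu * y[0], L * y[1]]

x0 = [1.0, 1.0]
# Gradient Method: (alpha, beta, gamma) = (1/L, 0, 0)
x_gm = run_method(1.0 / L, 0.0, 0.0, grad, x0)
# Fast Gradient Method: (1/L, q, q) with q = (sqrt(kappa)-1)/(sqrt(kappa)+1)
q = (math.sqrt(kappa) - 1) / (math.sqrt(kappa) + 1)
x_fgm = run_method(1.0 / L, q, q, grad, x0)
```

On this ill-conditioned example the momentum term is what separates the two tunings: GM contracts the slow coordinate by only (1 − µ/L) per step, while FGM's accelerated rate leaves it orders of magnitude closer to the minimizer after the same number of iterations.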