Sharp Analysis of Stochastic Optimization under Global Kurdyka-Łojasiewicz Inequality

Authors: Ilyas Fatkhullin, Jalal Etesami, Niao He, Negar Kiyavash

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "To verify the above result empirically, we simulated δ_t in (6) throughout all iterations of Algorithm 1 for different sets of parameters and presented the results in Figure 1, along with their corresponding convergence rates given in Corollary 1. As shown in this figure, the above convergence rates correctly capture the behaviour of the dynamics in (6)."
Researcher Affiliation | Academia | Ilyas Fatkhullin (ETH AI Center & ETH Zurich); Jalal Etesami (EPFL); Niao He (ETH Zurich); Negar Kiyavash (EPFL)
Pseudocode | Yes | Algorithm 1: SGD with restarts; Algorithm 2: PAGER (PAGE with restarts)
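The paper's Algorithm 1 is SGD with restarts; its exact stage lengths and stepsizes depend on the KL exponent and are not reproduced in this summary. Below is a minimal, hedged sketch of the generic restart pattern, assuming an illustrative halving/doubling schedule and a toy quadratic objective (both are assumptions, not the paper's choices):

```python
import random

random.seed(0)

def noisy_grad(x, sigma=0.1):
    """Stochastic gradient of the toy objective f(x) = x^2 (an assumption)."""
    return 2.0 * x + random.gauss(0.0, sigma)

def sgd_with_restarts(x0, stages=5, inner0=100, eta0=0.1, sigma=0.1):
    """Generic SGD-with-restarts skeleton: each stage runs plain SGD, then
    restarts from the last iterate with a smaller stepsize and a larger
    iteration budget. The halving/doubling schedule is illustrative only;
    the paper ties these quantities to the KL exponent."""
    x = x0
    for s in range(stages):
        eta = eta0 / 2 ** s        # shrink stepsize each stage
        budget = inner0 * 2 ** s   # grow iteration budget each stage
        for _ in range(budget):
            x -= eta * noisy_grad(x, sigma)
    return x

print(sgd_with_restarts(5.0))  # ends near the minimizer x* = 0
```

The shrinking-stepsize / growing-budget pattern is what lets restart schemes trade off fast initial progress against the variance floor of a fixed stepsize.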
Open Source Code | No | The paper does not provide any concrete access information (e.g., a repository link or an explicit statement of code release) for the source code.
Open Datasets | No | The paper focuses on theoretical analysis and simulations of dynamics rather than empirical evaluation on specific, publicly available datasets. No dataset names or access information are provided.
Dataset Splits | No | The paper does not describe experiments on datasets with specified training/validation/test splits; it refers only to simulations of theoretical dynamics.
Hardware Specification | No | The paper does not explicitly describe the hardware used for any computational work or simulations.
Software Dependencies | No | The paper does not provide specific software dependencies or version numbers.
Experiment Setup | No | The paper describes parameters used for simulating the theoretical dynamics (e.g., h(t) = t^β, ϕ(t) = 2µ t^{1/α}, and the τ values for Figure 1) but does not provide experimental setup details such as hyperparameters, optimizers, or training configurations for machine-learning models or real-world data experiments.