A Unified Convergence Theorem for Stochastic Optimization Methods

Authors: Xiao Li, Andre Milzarek

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | In this work, we provide a fundamental unified convergence theorem used for deriving expected and almost sure convergence results for a series of stochastic optimization methods.
Researcher Affiliation | Academia | Xiao Li: School of Data Science (SDS); Shenzhen Institute of Artificial Intelligence and Robotics for Society (AIRS); The Chinese University of Hong Kong, Shenzhen; Shenzhen, China; lixiao@cuhk.edu.cn. Andre Milzarek: School of Data Science (SDS); Shenzhen Research Institute of Big Data (SRIBD); The Chinese University of Hong Kong, Shenzhen; Shenzhen, China; andremilzarek@cuhk.edu.cn.
Pseudocode | No | The paper presents algorithmic update equations (e.g., x^{k+1} = prox_{α_k φ}(x^k − α_k g^k); see the sketch after this table) but does not include structured pseudocode blocks or sections labeled 'Algorithm'.
Open Source Code | No | Under '3. If you ran experiments...', the question 'Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)?' is answered with '[N/A]', indicating no code is provided.
Open Datasets | No | The paper focuses on theoretical convergence analysis and does not involve the use of datasets for training or any other purpose.
Dataset Splits | No | The paper focuses on theoretical convergence analysis and does not involve the use of datasets or their splits for validation.
Hardware Specification | No | The paper is theoretical and does not describe any experiments that would require specific hardware specifications.
Software Dependencies | No | The paper is theoretical and does not describe any experiments that would require specific software dependencies with version numbers.
Experiment Setup | No | The paper is theoretical and does not describe an experimental setup or hyperparameter details.
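
The Pseudocode row quotes the paper's prox-SGD-style update. Since no code accompanies the paper, the following is a minimal illustrative sketch (not the authors' implementation) of that update, x^{k+1} = prox_{α_k φ}(x^k − α_k g^k), assuming φ = λ‖·‖₁ so the proximal operator has a closed form (soft-thresholding); the names prox_sgd, grad_sample, and step_size are hypothetical.

import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def prox_sgd(grad_sample, x0, step_size, lam, num_steps, rng):
    # Iterate x^{k+1} = prox_{alpha_k * phi}(x^k - alpha_k * g^k)
    # with phi = lam * ||.||_1, whose prox is soft-thresholding.
    x = x0.copy()
    for k in range(num_steps):
        g = grad_sample(x, rng)   # stochastic gradient g^k
        alpha = step_size(k)      # step size alpha_k
        x = soft_threshold(x - alpha * g, alpha * lam)
    return x

# Example: f(x) = (1/n) * sum_i 0.5 * (a_i^T x - b_i)^2 + lam * ||x||_1,
# sampling one data point per step.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
b = rng.standard_normal(50)

def grad_sample(x, rng):
    i = rng.integers(A.shape[0])     # pick one data point at random
    return A[i] * (A[i] @ x - b[i])  # unbiased gradient of the smooth part

x_final = prox_sgd(grad_sample, np.zeros(10), lambda k: 1.0 / (k + 1),
                   lam=0.1, num_steps=5000, rng=rng)

The diminishing step sizes α_k = 1/(k+1) used in the example satisfy the classic conditions Σ α_k = ∞ and Σ α_k² < ∞ under which convergence results of this kind are typically stated.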