Dimension-free Complexity Bounds for High-order Nonconvex Finite-sum Optimization

Authors: Dongruo Zhou, Quanquan Gu

ICML 2022

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | More specifically, we propose stochastic high-order algorithms with novel first-order and high-order derivative estimators, which can achieve dimension-free complexity bounds. With the access to p-th order derivatives of the objective function, we prove that our algorithm finds ϵ-stationary points with O(n^{(2p-1)/(2p)} / ϵ^{(p+1)/p}) high-order oracle complexities.
Researcher Affiliation | Academia | Department of Computer Science, University of California, Los Angeles, CA 90095, USA. Correspondence to: Quanquan Gu <qgu@cs.ucla.edu>.
Pseudocode | Yes | Algorithm 1 (High-order Meta Algorithm), Algorithm 2 (OP-TE), Algorithm 3 (TP-TE)
Open Source Code | No | The paper does not contain any statement or link indicating that source code for the described methodology is publicly available.
Open Datasets | No | The paper presents theoretical work on complexity bounds for optimization problems. It does not mention specific datasets used for training or evaluation, nor does it provide any links or citations to publicly available datasets.
Dataset Splits | No | The paper focuses on theoretical analysis and does not describe empirical experiments involving dataset splits for training, validation, or testing.
Hardware Specification | No | The paper focuses on theoretical complexity analysis of algorithms. It does not mention any specific hardware used for running experiments.
Software Dependencies | No | The paper focuses on theoretical derivations and algorithms. It does not list any software dependencies with specific version numbers.
Experiment Setup | No | The paper describes theoretical algorithms and their complexity bounds, but it does not detail an experimental setup, hyperparameters, or training configurations.
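
The oracle complexity bound quoted in the Research Type row, O(n^{(2p-1)/(2p)} / ϵ^{(p+1)/p}), can be instantiated for small p as a quick sanity check. The snippet below is an illustrative sketch (not from the paper, and the function name is ours) that simply evaluates the two exponents as exact fractions.

```python
from fractions import Fraction

def oracle_complexity_exponents(p: int):
    """Exponents of n and eps in the quoted bound O(n^{(2p-1)/(2p)} / eps^{(p+1)/p})."""
    n_exponent = Fraction(2 * p - 1, 2 * p)
    eps_exponent = Fraction(p + 1, p)
    return n_exponent, eps_exponent

for p in (1, 2, 3):
    n_exp, eps_exp = oracle_complexity_exponents(p)
    print(f"p = {p}: O(n^({n_exp}) / eps^({eps_exp}))")
```

Purely as arithmetic consequences of the quoted expression, p = 1 gives O(n^{1/2} / ϵ^2), p = 2 gives O(n^{3/4} / ϵ^{3/2}), and p = 3 gives O(n^{5/6} / ϵ^{4/3}).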