Linear Regularizers Enforce the Strict Saddle Property

Authors: Matthew Ubl, Matthew Hale, Kasra Yazdani

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | This rule is shown to guarantee that gradient descent will escape the neighborhoods around a broad class of non-strict saddle points, and this behavior is demonstrated on numerical examples of non-strict saddle points common in the optimization literature.
Researcher Affiliation | Academia | Matthew Ubl, Matthew Hale, Kasra Yazdani; Department of Mechanical and Aerospace Engineering, University of Florida, Gainesville, FL 32611, USA. m.ubl@ufl.edu, kasra.yazdani@ufl.edu, matthewhale@ufl.edu
Pseudocode | Yes | Algorithm 1: Locally Linearly Regularized Gradient Descent (a hedged sketch of such an algorithm appears after this table).
Open Source Code | No | The paper does not mention providing access to the source code for the described methodology.
Open Datasets | No | The paper uses closed-form mathematical functions for its numerical examples (e.g., f(x, y) = (1/3)x^3 + (1/2)y^2 and the "Inverted Wine Bottle"; see the saddle-point check below) rather than publicly available datasets that would come with access information.
Dataset Splits | No | Because the numerical examples are built from mathematical functions, the paper has no training/validation/test dataset splits to report.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used to run its experiments.
Software Dependencies | No | The paper does not list specific software names with version numbers that would be needed to replicate the experiments.
Experiment Setup | Yes | We initialize Algorithm 1 at (1, 1) with γ = 1/54 and run it using values of θ varying from 0.01 to 1.7 (θ ≈ 1.717 for this function). Each run of the algorithm terminates when f(x) + ℓ < 10^-7. (A reproduction sketch follows the table.)
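For context on the first example function: its critical point at the origin is a non-strict saddle, which can be checked directly (this short derivation is ours, not quoted from the paper):

```latex
f(x,y) = \tfrac{1}{3}x^3 + \tfrac{1}{2}y^2, \qquad
\nabla f(x,y) = \begin{pmatrix} x^2 \\ y \end{pmatrix}, \qquad
\nabla^2 f(x,y) = \begin{pmatrix} 2x & 0 \\ 0 & 1 \end{pmatrix}.
```

At the origin the gradient vanishes and the Hessian is diag(0, 1), which has no strictly negative eigenvalue, so the strict saddle property fails there. Since f(x, 0) = x^3/3 changes sign through the origin, the point is a saddle rather than a local minimum, and plain gradient descent can slow arbitrarily as it approaches along the x-axis.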
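Below is a minimal, hedged reproduction sketch of the reported setup in Python/NumPy. Only the values quoted above, the initial point (1, 1), γ = 1/54, the θ range, and the 10^-7 tolerance, come from the report; the activation rule (apply the linear term only where ‖∇f‖ is small), the tilt direction u, and the reading of ℓ as the regularizer's current value are our assumptions for illustration, and the paper's Algorithm 1 specifies these precisely and may differ.

```python
import numpy as np

def f(z):
    """Example objective with a non-strict saddle at the origin."""
    x, y = z
    return x**3 / 3.0 + y**2 / 2.0

def grad_f(z):
    x, y = z
    return np.array([x**2, y])

def llr_gd(z0, gamma, theta, tol=1e-7, max_iter=500_000):
    """Hedged sketch of locally linearly regularized gradient descent.

    Assumed rule (ours, for illustration): when the gradient is small,
    i.e. in a flat, saddle-like region, descend on f(z) + theta * u @ z
    instead of f alone, so the flat direction is tilted and the iterate
    keeps moving.
    """
    z = np.asarray(z0, dtype=float)
    u = np.array([1.0, 0.0])  # assumed tilt direction (hypothetical; the paper specifies its own)
    for k in range(max_iter):
        g = grad_f(z)
        active = np.linalg.norm(g) < theta        # assumed local activation test
        ell = theta * (u @ z) if active else 0.0  # current value of the linear regularizer
        if f(z) + ell < tol:                      # stopping rule quoted in the report
            return z, k
        z = z - gamma * (g + theta * u if active else g)
    return z, max_iter

for theta in (0.01, 0.5, 1.0, 1.7):               # sample values from the reported sweep
    z, k = llr_gd((1.0, 1.0), gamma=1 / 54, theta=theta)
    print(f"theta={theta:4.2f}: stopped near {np.round(z, 4)} after {k} iterations")
```

In this sketch, larger θ turns the tilt on earlier and leaves the flat region in fewer iterations, consistent with sweeping θ upward; the absolute iteration counts depend entirely on the assumed activation rule and should not be read as the paper's results.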