Convergence-Rate-Matching Discretization of Accelerated Optimization Flows Through Opportunistic State-Triggered Control

Authors: Miguel Vaquero, Jorge Cortés

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Various simulations show the superior performance of the proposed method in comparison with recently proposed constant-stepsize discretizations.
Researcher Affiliation | Academia | Miguel Vaquero, Mechanical and Aerospace Engineering, UC San Diego, San Diego, CA, mivaquerovallina@ucsd.edu; Jorge Cortés, Mechanical and Aerospace Engineering, UC San Diego, San Diego, CA, cortes@ucsd.edu
Pseudocode | Yes | Algorithm 1 describes in pseudocode the resulting variable-stepsize integrator.
Open Source Code | No | The paper does not provide any explicit statement or link to open-source code for the described methodology.
Open Datasets | No | The objective function corresponds to the regularized logistic regression cost function, namely ∑_{i=1}^{10} log(1 + e^{−y_i⟨v_i, x⟩}) + ½‖x‖², where x ∈ ℝ⁴ and the sampled points (v_i, y_i) were generated randomly. This function is 1-strongly convex.
Dataset Splits | No | The paper does not specify training, validation, or test dataset splits; it mentions generating data randomly or using a quadratic objective function.
Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments.
Software Dependencies | No | The paper does not list specific software dependencies with version numbers.
Experiment Setup | Yes | The authors set α = µ/4 and s = µ/(36L²) following the values in [24]. The objective function is the regularized logistic regression cost ∑_{i=1}^{10} log(1 + e^{−y_i⟨v_i, x⟩}) + ½‖x‖², where x ∈ ℝ⁴ and the sampled points (v_i, y_i) were generated randomly.
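The regularized logistic regression objective quoted above can be sketched in code as follows. This is a hypothetical reconstruction, not the authors' released code (the paper provides none): the sampling scheme for the points (v_i, y_i) is unspecified in the extract, so standard normal features and random ±1 labels are assumed here; the ½‖x‖² term is what makes the function 1-strongly convex.

```python
import numpy as np

# Hypothetical reconstruction of the paper's objective:
#   f(x) = sum_{i=1}^{10} log(1 + exp(-y_i <v_i, x>)) + (1/2) ||x||^2,  x in R^4.
# The paper only says (v_i, y_i) are "generated randomly", so the
# distributions below are assumptions for illustration.
rng = np.random.default_rng(0)
n, d = 10, 4
V = rng.standard_normal((n, d))       # sample points v_i (assumed Gaussian)
y = rng.choice([-1.0, 1.0], size=n)   # binary labels y_i (assumed Rademacher)

def objective(x):
    """Regularized logistic loss; the 0.5*||x||^2 term makes it 1-strongly convex."""
    margins = y * (V @ x)
    return np.sum(np.log1p(np.exp(-margins))) + 0.5 * np.dot(x, x)

def gradient(x):
    """Gradient of the objective: -sum_i y_i v_i / (1 + exp(y_i <v_i, x>)) + x."""
    margins = y * (V @ x)
    sigma = 1.0 / (1.0 + np.exp(margins))
    return -(V.T @ (y * sigma)) + x
```

With L the Lipschitz constant of the gradient and µ = 1 the strong-convexity modulus, the stepsize choice s = µ/(36L²) from the table could then be plugged into any of the compared discretizations.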