Finite-Time Convergence in Continuous-Time Optimization

Authors: Orlando Romero, Mouhacine Benosman

ICML 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Finally, we conducted some numerical experiments to illustrate our results. In this section, we illustrate the finite-time convergence properties of the q-RGF (19) and our designed second-order flow (27) on academic optimization test functions."
Researcher Affiliation | Collaboration | "1. Department of Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, PA, USA. 2. Mitsubishi Electric Research Laboratories, Cambridge, MA, USA."
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide concrete access to source code, nor does it explicitly state that source code for the methodology is available.
Open Datasets | No | The paper uses a synthetically generated dataset based on a log-sum-exp function with parameters sampled from an N(0, 1) distribution, but it does not provide access information for a publicly available or open dataset.
Dataset Splits | No | The paper does not provide specific dataset split information (exact percentages, sample counts, citations to predefined splits, or a detailed splitting methodology).
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers such as Python 3.8 or CPLEX 12.4) needed to replicate the experiments.
Experiment Setup | Yes | "First, we fix x0 = 3/4 and vary q > 1. The results are reported in Figure 1. [...] Next, we fix q = 10 and vary x0 ∈ R near x = 0, while maintaining every other parameter the same as before. [...] We now test the second-order flow (27) with (c, α, r) = (f(x0), 1/2, 1) on the optimization testbed function known as the Rosenbrock function, namely f : R^2 → R given by f(x1, x2) = (a − x1)^2 + b(x2 − x1^2)^2, with parameters a, b ∈ R."
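
Since the paper releases no code, the experiment setup above can only be approximated. Below is a minimal sketch, not the authors' implementation, of how one might numerically integrate a q-rescaled gradient flow on the Rosenbrock test function. It assumes the q-RGF (19) takes the rescaled-gradient form x' = -c * grad f(x) / ||grad f(x)||^((q-2)/(q-1)); the Euler discretization, the parameter choices (q = 10, c = 1, a = 1, b = 100), and the starting point are illustrative assumptions rather than the paper's setup, and the paper's Rosenbrock experiment actually uses its second-order flow (27), which is not reproduced here.

```python
# Minimal sketch (not the authors' code): forward-Euler simulation of a
# q-rescaled gradient flow, assumed to have the form
#   x' = -c * grad f(x) / ||grad f(x)||^((q-2)/(q-1)),
# applied to the Rosenbrock function. Parameters and step size are illustrative.
import numpy as np

def rosenbrock(x, a=1.0, b=100.0):
    return (a - x[0])**2 + b * (x[1] - x[0]**2)**2

def rosenbrock_grad(x, a=1.0, b=100.0):
    dx0 = -2.0 * (a - x[0]) - 4.0 * b * x[0] * (x[1] - x[0]**2)
    dx1 = 2.0 * b * (x[1] - x[0]**2)
    return np.array([dx0, dx1])

def q_rgf_euler(x0, q=10.0, c=1.0, dt=1e-4, steps=200_000, tol=1e-10):
    """Crudely integrate the (assumed) q-RGF until the gradient is numerically zero
    or the step budget runs out; step size and budget may need tuning."""
    x = np.asarray(x0, dtype=float)
    exponent = (q - 2.0) / (q - 1.0)
    for _ in range(steps):
        g = rosenbrock_grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        x = x - dt * c * g / gnorm**exponent
    return x

if __name__ == "__main__":
    x_final = q_rgf_euler(x0=[-1.5, 2.0])
    # The Rosenbrock minimum sits at (a, a^2) = (1, 1) for the default parameters.
    print("approximate minimizer:", x_final, " f =", rosenbrock(x_final))
```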
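
The "Open Datasets" row notes that the only data are synthetic: a log-sum-exp objective whose parameters are drawn from N(0, 1). The sketch below shows one plausible way to regenerate such a test problem; the exact functional form (here f(x) = log sum_i exp(a_i^T x + b_i)), the problem dimensions, and the seed are assumptions, since the paper does not publish the generated instances.

```python
# Minimal sketch (assumption, not the authors' code): a randomly generated
# log-sum-exp objective with coefficients a_i, b_i drawn i.i.d. from N(0, 1).
import numpy as np

def make_log_sum_exp(m=50, n=10, seed=0):
    """Return a log-sum-exp objective f and its gradient for random N(0, 1) data."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((m, n))   # rows a_i ~ N(0, 1)
    b = rng.standard_normal(m)        # offsets b_i ~ N(0, 1)

    def f(x):
        z = A @ x + b
        zmax = z.max()                # shift for numerical stability
        return zmax + np.log(np.exp(z - zmax).sum())

    def grad_f(x):
        z = A @ x + b
        w = np.exp(z - z.max())
        w /= w.sum()                  # softmax weights
        return A.T @ w

    return f, grad_f

if __name__ == "__main__":
    f, grad_f = make_log_sum_exp()
    x = np.zeros(10)
    print("f(0) =", f(x), " ||grad f(0)|| =", np.linalg.norm(grad_f(x)))
```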