On Online Optimization: Dynamic Regret Analysis of Strongly Convex and Smooth Problems

Authors: Ting-Jui Chang, Shahin Shahrampour (pp. 6966–6973)

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In Fig. 1, we can see that the regret incurred using the stale function information grows much faster than the one using the perfectly-predicted information, which verifies the theoretical advantage of OON.
Researcher Affiliation | Academia | Ting-Jui Chang, Shahin Shahrampour; Wm Michael Barnes '64 Department of Industrial and Systems Engineering, Texas A&M University, College Station, TX, USA; {tingjui.chang, shahin}@tamu.edu
Pseudocode | Yes | Algorithm 1: Online Preconditioned Gradient Descent (OPGD); Algorithm 2: Optimistic Online Newton (OON); Algorithm 3: Online Multiple Gradient Descent (OMGD) (Zhang et al. 2017); Algorithm 4: Online Preconditioned Gradient Descent (OPGD) for Constrained Setup; Algorithm 5: Online Multiple Gradient Descent (OMGD) (Zhang et al. 2017) for Constrained Setup. (A hedged sketch of OMGD appears after the table.)
Open Source Code | No | The paper does not provide any explicit statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | No | The paper describes a synthetic function sequence generated for experiments but does not provide access information (link, DOI, repository, or citation) for a publicly available dataset.
Dataset Splits | No | The paper conducts experiments on a synthetically generated function sequence but does not mention dataset splits (e.g., train/validation/test percentages or counts), as it is not a standard dataset evaluation.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiments.
Experiment Setup | Yes | Consider a function sequence of the form $f_t(x) = \frac{1}{2}(x - x_t^*)^\top Q_t (x - x_t^*)$, where $Q_t$ is a positive definite matrix with $\alpha I \preceq Q_t \preceq \beta I$ ($\alpha = 1$ and $\beta = 30$). ... The optimal point of the next function is randomly selected from the sphere centered at the current optimal point with radius $\alpha / L_H$. (A generator sketch for this sequence follows the table.)
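
To make the setup row concrete, below is a minimal Python/NumPy sketch of the synthetic quadratic sequence described above. It assumes the eigenvalues of Q_t are drawn uniformly from [α, β] and treats the drift radius α/L_H as a user-supplied constant `drift`; these choices, and all function and parameter names, are illustrative rather than taken from the paper.

```python
import numpy as np

def make_sequence(T, d, alpha=1.0, beta=30.0, drift=0.1, seed=0):
    """Generate T quadratics f_t(x) = 0.5 (x - x*_t)^T Q_t (x - x*_t)
    with alpha * I <= Q_t <= beta * I and minimizers drifting on a sphere
    of radius `drift` (standing in for alpha / L_H)."""
    rng = np.random.default_rng(seed)
    Qs, x_stars = [], []
    x_star = rng.standard_normal(d)
    for _ in range(T):
        # Random symmetric PD matrix: random orthogonal basis with
        # eigenvalues sampled from [alpha, beta].
        U, _ = np.linalg.qr(rng.standard_normal((d, d)))
        eigs = rng.uniform(alpha, beta, size=d)
        Qs.append(U @ np.diag(eigs) @ U.T)
        x_stars.append(x_star.copy())
        # Next minimizer: uniform random direction, fixed drift radius.
        step = rng.standard_normal(d)
        x_star = x_star + drift * step / np.linalg.norm(step)
    return Qs, x_stars
```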
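And here is a hedged sketch of the OMGD baseline (Algorithm 3, following Zhang et al. 2017) on that sequence: after each loss is revealed, the learner takes K inner gradient descent steps before the next round. The step size 1/β and K = 5 are standard illustrative choices, not values quoted from the paper.

```python
def omgd(Qs, x_stars, K=5, eta=None):
    """Online Multiple Gradient Descent on the quadratic sequence.
    Returns the cumulative loss of the played iterates, which here equals
    the dynamic regret since each f_t is zero at its minimizer x*_t."""
    beta = max(np.linalg.eigvalsh(Q).max() for Q in Qs)
    eta = eta if eta is not None else 1.0 / beta  # classic 1/smoothness step
    x = np.zeros(Qs[0].shape[0])
    total = 0.0
    for Q, x_star in zip(Qs, x_stars):
        total += 0.5 * (x - x_star) @ Q @ (x - x_star)  # suffer f_t(x_t)
        for _ in range(K):
            x = x - eta * Q @ (x - x_star)  # grad f_t(x) = Q_t (x - x*_t)
    return total
```

A call like `omgd(*make_sequence(T=1000, d=10))` gives a baseline regret trajectory on the synthetic sequence; Fig. 1 in the paper instead compares OON run with stale versus perfectly-predicted function information.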