Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Adaptivity and Non-stationarity: Problem-dependent Dynamic Regret for Online Convex Optimization

Authors: Peng Zhao, Yu-Jie Zhang, Lijun Zhang, Zhi-Hua Zhou

JMLR 2024

Reproducibility variables, with the classified result and the supporting LLM response for each:
Research Type: Experimental
LLM Response: "This section provides empirical studies to validate the effectiveness of our algorithms. ... We compare the performance on both synthetic and real-world datasets. ... We repeat the experiments five times and report the mean and the standard deviation in Figure 1 and Figure 2."
Researcher Affiliation: Academia
LLM Response: "Peng Zhao EMAIL, Yu-Jie Zhang EMAIL, Lijun Zhang EMAIL, Zhi-Hua Zhou EMAIL. National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China"
Pseudocode: Yes
LLM Response:
Algorithm 1 Sword: meta-algorithm
Input: step size pool H; learning rate ε
1: Initialization: ∀i ∈ [N], p_{0,i} = 1/N
2: for t = 1 to T do
3:   Receive x_{t+1,i} from base-learner B_i
4:   Update weight p_{t+1,i} by (16)
5:   Predict x_{t+1} = Σ_{i=1}^{N} p_{t+1,i} x_{t+1,i}
6: end for

Algorithm 2 Sword: base-algorithm
Input: step size η_i ∈ H
1: Let x̂_{1,i}, x_{1,i} be any point in X
2: for t = 1 to T do
3:   x̂_{t+1,i} = Π_X[x̂_{t,i} − η_i ∇f_t(x_{t,i})]
4:   x_{t+1,i} = Π_X[x̂_{t+1,i} − η_i ∇f_t(x_{t,i})]
5:   Send x_{t+1,i} to the meta-algorithm
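The quoted Sword pseudocode (a meta-algorithm combining base-learners that run with a pool of step sizes) can be sketched as a minimal Python loop. This is a simplified illustration, not the paper's implementation: the meta update (16) is stood in for by a plain exponential-weights step on linearized losses, the optimistic two-projection base update is collapsed into ordinary projected OGD, and the names `sword_sketch` and `grad_fn` as well as the Euclidean-ball domain are my assumptions.

```python
import numpy as np

def sword_sketch(grad_fn, T, d, etas, eps, radius=1.0):
    """Minimal two-layer online-ensemble sketch in the spirit of Sword.

    grad_fn(t, x) returns the gradient of the round-t loss f_t at x;
    etas is the step-size pool H; eps is the meta learning rate.
    """
    def project(x):
        # Euclidean projection onto a ball of radius `radius`
        # (stands in for the projection Pi_X onto the feasible domain).
        n = np.linalg.norm(x)
        return x if n <= radius else x * (radius / n)

    N = len(etas)
    p = np.ones(N) / N        # meta weights, uniform initialization
    X = np.zeros((N, d))      # base-learner iterates x_{t,i}
    for t in range(T):
        x = p @ X             # combined prediction x_t = sum_i p_{t,i} x_{t,i}
        g = grad_fn(t, x)     # gradient of f_t at the combined prediction
        # Meta step: exponential weights on linearized losses <g, x_{t,i}>.
        p = p * np.exp(-eps * (X @ g))
        p /= p.sum()
        # Base step: each learner takes a projected gradient step
        # with its own step size from the pool.
        for i, eta in enumerate(etas):
            X[i] = project(X[i] - eta * g)
    return x
```

On a fixed convex loss the combined iterate tracks the minimizer; the point of the two-layer design is that in non-stationary environments the meta weights shift toward whichever step size currently suits the environment.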
Open Source Code: Yes
LLM Response: "The implementations of all algorithms are based on the PyNOL package (Li et al., 2023a). ... Long-Fei Li, Peng Zhao, Yan-Feng Xie, Lijun Zhang, and Zhi-Hua Zhou. PyNOL: A Python Package for Non-stationary Online Learning, 2023a. URL https://github.com/li-lf/PyNOL."
Open Datasets: Yes
LLM Response: "Next, we employ a real-world dataset called Sulfur recovery unit (SRU) (Gama et al., 2014; Zhao et al., 2021b), which is a regression dataset with slowly evolving distribution changes."
Dataset Splits: No
LLM Response: The paper describes how the synthetic data are generated and uses a real-world dataset, but it does not specify explicit training/validation/test splits for either. It mentions general simulation parameters such as T = 50000 and S = 1000, but no data-partitioning methodology for evaluation.
Hardware Specification: No
LLM Response: The paper does not provide specific hardware details such as GPU/CPU models, memory amounts, or the cloud instances used for the experiments. It mentions in passing that "gradient computation is the most time-consuming in our simulations", but that is not a hardware specification.
Software Dependencies: No
LLM Response: "The implementations of all algorithms are based on the PyNOL package (Li et al., 2023a)." While the PyNOL package is mentioned, no version number is given, which is required for a reproducible description of software dependencies.
Experiment Setup: Yes
LLM Response: "Settings. We simulate the online environments as follows. ... In our simulation, we set Γ = 1, D = 2, d = 5, T = 50000, S = 1000, and δ = 2. ... we choose the Huber loss defined as ℓ(y, ŷ) = (1/2)(y − ŷ)² for |y − ŷ| ≤ δ, and δ(|y − ŷ| − δ/2) otherwise."
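The Huber loss from the quoted setup (threshold δ, set to 2 in the paper's simulation) is straightforward to write down; the function name `huber_loss` is mine, not the paper's:

```python
import numpy as np

def huber_loss(y, y_hat, delta=2.0):
    """Huber loss: quadratic for small residuals, linear for large ones.

    l(y, y_hat) = 0.5 * (y - y_hat)**2             if |y - y_hat| <= delta
                = delta * (|y - y_hat| - delta/2)  otherwise
    """
    r = np.abs(y - y_hat)
    return np.where(r <= delta, 0.5 * r ** 2, delta * (r - 0.5 * delta))
```

The two branches meet at |y − ŷ| = δ with matching value (δ²/2) and slope (δ), so the loss is continuously differentiable, which is what makes it a smooth, outlier-robust choice for the regression experiments.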