PDE-Based Optimal Strategy for Unconstrained Online Learning

Authors: Zhiyu Zhang, Ashok Cutkosky, Ioannis Paschalidis

ICML 2022

Reproducibility Variable | Result | LLM Response

Research Type | Experimental | "Our theoretical results are supported by experiments. In this section, we test our one-dimensional unconstrained OLO algorithm (Algorithm 5) on a synthetic Online Convex Optimization (OCO) task, based on the standard reduction from OCO to OLO." (Section 5, Experiment)

Researcher Affiliation | Academia | "Zhiyu Zhang 1, Ashok Cutkosky 1, Ioannis Ch. Paschalidis 1. 1 Boston University. Correspondence to: Zhiyu Zhang <zhiyuz@bu.edu>, Ashok Cutkosky <ashok@cutkosky.com>, Ioannis Ch. Paschalidis <yannisp@bu.edu>."

Pseudocode | Yes | "Algorithm 1: From coin-betting to OLO."

Open Source Code | Yes | "Code is available at https://github.com/zhiyuzz/ICML2022-PDE-Potential."

Open Datasets | Yes | "We use the Year Prediction MSD dataset (Bertin-Mahieux et al., 2011) available from the UCI Machine Learning Repository (Dua & Graff, 2017)."

Dataset Splits | No | The paper mentions using a linear model and preprocessing steps, but does not specify any training/validation/test splits (percentages, counts, or predefined standard splits) needed for reproduction.

Hardware Specification | No | The paper does not report the hardware used for the experiments, such as CPU or GPU models, memory, or cloud instance types.

Software Dependencies | No | The paper describes the algorithms and experimental procedure, but does not list software dependencies (e.g., Python, PyTorch, TensorFlow) with version numbers, which would be necessary for exact reproduction.

Experiment Setup | Yes | "Specifically we choose C = 1 in both versions. One reason is that this is the most natural choice when no information is available beforehand. More importantly, at the beginning of the optimization process, C = 1 induces the same asymptotic exponential growth rate for the predictions of the two versions."
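The pseudocode row above references a coin-betting-to-OLO reduction, and the setup row fixes the initial wealth C = 1. As a rough illustration of what that style of reduction looks like, here is a minimal sketch of a standard Krichevsky-Trofimov coin-betting learner for one-dimensional unconstrained OLO. This is a generic baseline sketched from the coin-betting literature, not the paper's PDE-based potential; the function name and the gradient-clipping convention are illustrative assumptions.

```python
def kt_coin_betting_olo(grads, C=1.0):
    """One-dimensional unconstrained OLO via Krichevsky-Trofimov
    coin betting: bet a fraction of current wealth each round.

    grads: sequence of loss gradients g_t, assumed in [-1, 1].
    C: initial wealth (the report's setup row uses C = 1).
    Returns the list of predictions w_1, ..., w_T.
    """
    wealth = C      # current wealth; predictions are a fraction of it
    coin_sum = 0.0  # running sum of coin outcomes c_s = -g_s
    preds = []
    for t, g in enumerate(grads, start=1):
        g = max(-1.0, min(1.0, g))  # enforce the |g_t| <= 1 assumption
        beta = coin_sum / t         # KT betting fraction, |beta| < 1
        w = beta * wealth           # prediction = bet on the next coin
        preds.append(w)
        wealth += -g * w            # wealth gained/lost this round
        coin_sum += -g
    return preds
```

For example, on a stream of constant gradients g_t = -1, the learner's wealth compounds and its predictions grow toward the (unknown) good direction, which is the behavior the unconstrained OLO guarantees formalize.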