Projection-free Online Learning in Dynamic Environments

Authors: Yuanyu Wan, Bo Xue, Lijun Zhang

AAAI 2021, pp. 10067-10075 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results validate the efficiency and effectiveness of our algorithm. In this section, we perform numerical experiments in dynamic environments to verify the efficiency and effectiveness of our Multi-OCG+.
Researcher Affiliation | Academia | National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China; {wanyy, xueb, zhanglj}@lamda.nju.edu.cn
Pseudocode | Yes | Algorithm 1 CG, Algorithm 2 OCG+, Algorithm 3 Multi-OCG+ (a generic conditional-gradient sketch is given below the table)
Open Source Code | No | The paper does not provide an explicit statement or link for the open-source code of the described methodology.
Open Datasets | Yes | We use a publicly available dataset, MovieLens 100K (https://grouplens.org/datasets/movielens/100k/), which originally contains 100,000 ratings in {1, 2, 3, 4, 5} by 943 users on 1682 movies... (a loading/partitioning sketch is given below the table)
Dataset Splits | No | The paper mentions dividing the dataset into T = 3000 partitions for online learning but does not specify traditional train/validation/test splits with percentages or sample counts for model training.
Hardware Specification | Yes | All algorithms are implemented with Matlab R2016b and tested on a Linux machine with a 2.4 GHz CPU and 768 GB RAM.
Software Dependencies | Yes | All algorithms are implemented with Matlab R2016b
Experiment Setup | Yes | For our Multi-OCG+, we set H = {γ_i = 2^i | i = 0, ..., log2(T)}. Since f_t(X) is not strongly convex, the parameter τ is set to be s/√T, where s is selected from {1e-4, 1e-3, ..., 1.0}. Besides, the parameter η_γ of each expert E_γ is set to be c/√γ, where c is selected from {0.1, 1.0, ..., 1e6}. ... we simply set K_γ = 4 for our Multi-OCG+ to reduce the time cost. (A sketch of these grids is given below the table.)
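
For context on the pseudocode row (Algorithms 1-3), the following is a minimal, hypothetical sketch of the projection-free primitive that conditional gradient methods rely on: a linear minimization oracle used in place of a projection, illustrated here over a trace-norm ball. The function names (trace_norm_lmo, cg_step) and the choice of feasible set are assumptions for illustration, not the paper's exact Algorithm 1 (CG).

```python
import numpy as np
from scipy.sparse.linalg import svds

def trace_norm_lmo(grad, r):
    """Linear minimization oracle over the trace-norm ball of radius r:
    argmin_{||V||_tr <= r} <grad, V> = -r * u1 v1^T, where (u1, v1) is the
    top singular-vector pair of the gradient (illustrative assumption)."""
    u, _, vt = svds(grad, k=1)                 # top singular vectors of the gradient
    return -r * np.outer(u[:, 0], vt[0, :])

def cg_step(x, grad, r, step_size):
    """One conditional gradient (Frank-Wolfe) update: move toward the LMO vertex."""
    v = trace_norm_lmo(grad, r)
    return x + step_size * (v - x)
```

The key point this sketch illustrates is that the update only needs a top singular-vector computation rather than a full projection onto the feasible set, which is why such methods are called projection-free.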
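The dataset row quotes MovieLens 100K and T = 3000 partitions. Below is a minimal sketch of one way to load the ratings and split them into sequential rounds, assuming the standard u.data tab-separated format (user id, item id, rating, timestamp); the paper's exact preprocessing is not described in the quoted text, so the helper names, file path, and splitting scheme are illustrative.

```python
import numpy as np

def load_movielens_100k(path="u.data"):
    """Load MovieLens 100K ratings: tab-separated (user_id, item_id, rating, timestamp)."""
    raw = np.loadtxt(path, dtype=int)                     # shape (100000, 4)
    users, items = raw[:, 0] - 1, raw[:, 1] - 1           # switch to 0-based indices
    ratings = raw[:, 2].astype(float)
    return users, items, ratings

def make_rounds(users, items, ratings, T=3000):
    """Split the rating stream into T roughly equal partitions, one per online round."""
    idx = np.array_split(np.arange(len(ratings)), T)
    return [(users[i], items[i], ratings[i]) for i in idx]

# Example usage (illustrative): each round t reveals a small batch of ratings
# users, items, ratings = load_movielens_100k()
# rounds = make_rounds(users, items, ratings)             # len(rounds) == 3000
```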
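The experiment-setup row lists the hyperparameter grids verbatim. The sketch below instantiates them in code, assuming both search grids step by powers of ten and that the exponent range for H is floored at log2(T); the selected values of s and c are not reported, so nothing here should be read as the tuned configuration.

```python
import math

T = 3000                                             # number of online rounds

# H = {gamma_i = 2^i | i = 0, ..., log2(T)}; flooring the exponent is an assumption
H = [2 ** i for i in range(int(math.log2(T)) + 1)]

s_grid = [10.0 ** k for k in range(-4, 1)]           # {1e-4, 1e-3, ..., 1.0}
c_grid = [10.0 ** k for k in range(-1, 7)]           # {0.1, 1.0, ..., 1e6}

def tau(s):
    """f_t is not strongly convex, so tau scales as 1/sqrt(T)."""
    return s / math.sqrt(T)

def eta(c, gamma):
    """Step size eta_gamma of expert E_gamma."""
    return c / math.sqrt(gamma)

K_gamma = 4                                          # per-expert iteration budget, fixed to 4 as quoted
```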