Accelerated Incremental Gradient Descent using Momentum Acceleration with Scaling Factor

Authors: Yuanyuan Liu, Fanhua Shang, Licheng Jiao

IJCAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We also give experimental results justifying our theoretical results and showing the effectiveness of our algorithm. In this section, we evaluate the performance of our algorithm to justify our theoretical results. We conducted many experiments on the strongly convex logistic regression problem on two real-world data sets: Covtype (581,012 examples and 54 features) and a9a (32,562 examples and 123 features). Figs. 1 and 2 show how the objective gap (i.e., F(x^K) - F(x^*)) of all these algorithms decreases for logistic regression with different regularization parameters. (A sketch of this objective-gap computation follows the table.)
Researcher Affiliation | Academia | Yuanyuan Liu (1,2), Fanhua Shang (1,2), and Licheng Jiao (1,2); (1) Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education; (2) School of Artificial Intelligence, Xidian University, China. {yyliu, fhshang}@xidian.edu.cn, lchjiao@mail.xidian.edu.cn
Pseudocode | Yes | Algorithm 1: AIGD for Strongly Convex Objectives. (A generic momentum-accelerated incremental gradient sketch follows the table.)
Open Source Code | No | The paper provides no statement of, or link to, open-source code for the described methodology.
Open Datasets | Yes | We conducted many experiments on the strongly convex logistic regression problem on two real-world data sets: Covtype (581,012 examples and 54 features) and a9a (32,562 examples and 123 features). (A data-loading sketch follows the table.)
Dataset Splits | No | The paper does not explicitly provide training, validation, or test splits; it only reports the total sizes of the data sets used.
Hardware Specification | No | The paper does not report the hardware (e.g., CPU/GPU models, memory) used to run the experiments.
Software Dependencies | No | The paper does not list the software dependencies or version numbers needed to replicate the experiments.
Experiment Setup | Yes | For SVRG and Katyusha, we set the epoch size m = 2n, as suggested in [Johnson and Zhang, 2013; Allen-Zhu, 2018]. Figs. 1 and 2 show how the objective gap (i.e., F(x^K) - F(x^*)) of all these algorithms decreases for logistic regression with different regularization parameters λ = 10^-4, 10^-7, 10^-8. (An SVRG sketch with m = 2n follows the table.)
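
For concreteness, the objective gap F(x^K) - F(x^*) used in Figs. 1 and 2 can be computed as below. This is a minimal sketch assuming the standard l2-regularized logistic regression objective with labels in {-1, +1}; the function names are ours, and the paper's exact normalization of the loss term may differ.

```python
import numpy as np

def logistic_objective(w, X, y, lam):
    """F(w) = (1/n) sum_i log(1 + exp(-y_i x_i^T w)) + (lam/2) ||w||^2.

    Assumes labels y in {-1, +1}; lam plays the role of the paper's
    regularization parameter lambda.
    """
    margins = y * (X @ w)
    loss = np.mean(np.logaddexp(0.0, -margins))  # stable log(1 + e^{-margin})
    return loss + 0.5 * lam * np.dot(w, w)

def objective_gap(w_k, w_star, X, y, lam):
    """The quantity plotted in Figs. 1 and 2: F(x^K) - F(x^*)."""
    return logistic_objective(w_k, X, y, lam) - logistic_objective(w_star, X, y, lam)
```

In practice x^* is unknown, so the gap is usually measured against a high-accuracy solution obtained by running a solver for many more passes.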
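
Algorithm 1 itself is not reproduced in this report. As a rough illustration only, the sketch below shows the generic shape of a momentum-accelerated incremental gradient loop with a scaling parameter; it is not the paper's Algorithm 1, and grad_i, eta (step size), and theta (momentum/scaling factor) are placeholders introduced here.

```python
import numpy as np

def aigd_like_sketch(grad_i, w0, n, eta, theta, epochs, seed=0):
    """Generic momentum-accelerated incremental gradient loop (illustrative only)."""
    rng = np.random.default_rng(seed)
    w, w_prev = w0.copy(), w0.copy()
    for _ in range(epochs):
        for i in rng.permutation(n):      # one incremental pass over the n components
            v = w + theta * (w - w_prev)  # extrapolation (momentum) step
            w_prev = w
            w = v - eta * grad_i(v, i)    # gradient step on one component, taken at v
    return w
```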
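
Both data sets are publicly available, which is what the Open Datasets row refers to. A possible loading route, assuming scikit-learn and a locally downloaded LIBSVM copy of a9a (the file name below is hypothetical):

```python
from sklearn.datasets import fetch_covtype, load_svmlight_file

cov = fetch_covtype()                 # Covtype: 581,012 examples, 54 features
X_cov, y_cov = cov.data, cov.target

# a9a is distributed in LIBSVM format (123 features); download it from the
# LIBSVM repository first. "a9a" is a hypothetical local file path.
X_a9a, y_a9a = load_svmlight_file("a9a")
```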
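
To make the quoted baseline setup concrete, here is a minimal SVRG sketch with epoch size m = 2n in the style of [Johnson and Zhang, 2013]. The helpers grad_i and full_grad, the step size eta, and the last-iterate snapshot rule are our assumptions, not details taken from the paper.

```python
import numpy as np

def svrg(grad_i, full_grad, w0, n, eta, epochs, seed=0):
    """SVRG with epoch size m = 2n (assumed helpers and step size)."""
    m = 2 * n                           # epoch size m = 2n, as in the quoted setup
    rng = np.random.default_rng(seed)
    w_snap = w0.copy()
    for _ in range(epochs):
        mu = full_grad(w_snap)          # full gradient at the snapshot point
        w = w_snap.copy()
        for _ in range(m):
            i = rng.integers(n)
            g = grad_i(w, i) - grad_i(w_snap, i) + mu  # variance-reduced gradient
            w = w - eta * g
        w_snap = w                      # use the last iterate as the next snapshot
    return w_snap
```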