Sharper Generalization Bounds for Learning with Gradient-dominated Objective Functions

Authors: Yunwen Lei, Yiming Ying

ICLR 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We report some preliminary experiments to support our theory. We consider the dataset IJCNN available from the LIBSVM website (Chang & Lin, 2011) and report the average of experimental results from 25 repetitions."
Researcher Affiliation | Academia | "(1) School of Computer Science, University of Birmingham, Birmingham B15 2TT, United Kingdom; (2) Department of Computer Science, TU Kaiserslautern, Kaiserslautern 67653, Germany; (3) Department of Mathematics and Statistics, State University of New York at Albany, USA. y.lei@bham.ac.uk, yying@albany.edu"
Pseudocode | Yes | "The framework of stochastic variance-reduced optimization is described in Algorithm 1 in Appendix D.3." (a minimal sketch of this framework follows the table)
Open Source Code | No | No statement or link providing concrete access to open-source code for the methodology described in this paper.
Open Datasets | Yes | "We consider the dataset IJCNN available from the LIBSVM website (Chang & Lin, 2011)"
Dataset Splits | Yes | "We use 80 percents of the dataset for training and reserve the remaining 20 percents for testing." (a loading and splitting sketch follows the table)
Hardware Specification | No | No specific hardware details (such as GPU/CPU models or cloud instances) are mentioned for running experiments.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers for reproducibility.
Experiment Setup | Yes | "We apply SGD with the step size η_t = 1/(1 + 0.001t) and compute the testing error of {w_t} on the testing dataset." (an SGD sketch with this step size follows the table)
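
The paper's Algorithm 1 (Appendix D.3) is not reproduced on this page, so the following is only a minimal sketch of the standard stochastic variance-reduced (SVRG-style) framework it refers to. The names grad_full, grad_i, the inner-loop length m, and the step size eta are placeholders chosen here, not the paper's notation; consult Algorithm 1 for the authoritative version.

    import numpy as np

    def svrg(grad_full, grad_i, w0, n, eta=0.1, epochs=10, m=None):
        """Minimal SVRG-style sketch (placeholder names, not the paper's Algorithm 1).

        grad_full(w): full gradient over all n examples.
        grad_i(w, i): gradient on example i.
        """
        m = m or n                      # inner-loop length; a common default is n
        w_tilde = w0.copy()
        for _ in range(epochs):
            mu = grad_full(w_tilde)     # anchor gradient at the snapshot point
            w = w_tilde.copy()
            for _ in range(m):
                i = np.random.randint(n)
                # variance-reduced stochastic gradient
                v = grad_i(w, i) - grad_i(w_tilde, i) + mu
                w -= eta * v
            w_tilde = w                 # update the snapshot with the last inner iterate
        return w_tilde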
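A minimal sketch of how the quoted data setup could be reproduced in Python, assuming the scikit-learn LIBSVM loader; the file name "ijcnn1" refers to the dataset as downloaded from the LIBSVM website (Chang & Lin, 2011), and the local path is hypothetical.

    import numpy as np
    from sklearn.datasets import load_svmlight_file
    from sklearn.model_selection import train_test_split

    # "ijcnn1" as downloaded from the LIBSVM dataset page (path is hypothetical)
    X, y = load_svmlight_file("ijcnn1")
    X = X.toarray()

    # 80% training / 20% testing, matching the split quoted in the table
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)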
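And a sketch of the quoted experiment setup: SGD with step size η_t = 1/(1 + 0.001t), with the test error averaged over 25 repetitions as reported. The logistic loss and ±1 labels are assumptions made here for concreteness (the quoted text does not fix the objective); X_tr, X_te, y_tr, y_te are reused from the loading sketch above.

    import numpy as np

    def sgd(X, y, T=10000):
        """SGD with the quoted step size eta_t = 1/(1 + 0.001 t).

        Logistic loss and labels in {-1, +1} are assumptions of this sketch.
        """
        n, d = X.shape
        w = np.zeros(d)
        for t in range(1, T + 1):
            eta = 1.0 / (1.0 + 0.001 * t)
            i = np.random.randint(n)
            margin = y[i] * X[i].dot(w)
            # gradient of log(1 + exp(-margin)) with respect to w
            w -= eta * (-y[i] * X[i]) / (1.0 + np.exp(margin))
        return w

    # Average test error over 25 repetitions, as reported in the paper
    errs = []
    for rep in range(25):
        w = sgd(X_tr, y_tr)
        errs.append(np.mean(np.sign(X_te.dot(w)) != y_te))
    print(np.mean(errs))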