Stochastic Adaptive Quasi-Newton Methods for Minimizing Expected Values

Authors: Chaoxu Zhou, Wenbo Gao, Donald Goldfarb

ICML 2017

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We compared several implementations of SA-GD and SA-BFGS against the original SGD method and the robust SGD method (Nemirovski et al., 2009) on a penalized least squares problem with random design. ... Figure 1 shows the performance of each algorithm on a series of problems with varying problem size p and parameter ρ. |
| Researcher Affiliation | Academia | Dept. of Industrial Engineering and Operations Research, Columbia University. |
| Pseudocode | Yes | Algorithm 1 SA-GD |
| Open Source Code | No | The paper does not contain any statement about releasing source code for the described methodology or a link to a code repository. |
| Open Datasets | No | The objective function is based on a 'penalized least squares problem with random design' where 'X was drawn according to a multivariate N(0, Σ(ρ))'. This describes a synthetically generated dataset rather than a publicly available one with concrete access information (see the data-generation sketch below the table). |
| Dataset Splits | No | The paper defines problem sizes (p = 100 and p = 500) and describes the generation of random data for a penalized least squares problem. However, it does not specify any explicit train/validation/test dataset splits (e.g., percentages or counts) or reference standard splits from a benchmark dataset. |
| Hardware Specification | Yes | The algorithms were implemented in Matlab 2015a, and the system was an Intel i5-5200U running Ubuntu. |
| Software Dependencies | Yes | The algorithms were implemented in Matlab 2015a. |
| Experiment Setup | Yes | SGD: SGD with fixed m_k = p and diminishing step sizes t_k = 1/(k + 1000) for problems with p = 100, and t_k = 1/(k + 5000) for p = 500. (See the baseline sketch below the table.) |
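
The Open Datasets row notes that the data are synthetically generated. Below is a minimal sketch of how such a problem instance might be constructed, assuming Σ(ρ) is the equicorrelation matrix (ones on the diagonal, ρ off the diagonal) and a squared ℓ2 penalty with weight lam; the excerpt only says that X was drawn from a multivariate N(0, Σ(ρ)), so the covariance structure, penalty form, noise level, and the helper name make_penalized_ls_problem are all illustrative assumptions rather than the paper's exact setup.

```python
import numpy as np

def make_penalized_ls_problem(p=100, n=1000, rho=0.5, lam=0.01, seed=0):
    """Generate a synthetic penalized least squares problem with random design.

    Assumption: Sigma(rho) is the equicorrelation matrix (1 on the diagonal,
    rho elsewhere) and the penalty is (lam/2)*||w||^2; the excerpt above only
    states that X ~ N(0, Sigma(rho)).
    """
    rng = np.random.default_rng(seed)
    Sigma = np.full((p, p), rho) + (1.0 - rho) * np.eye(p)
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    w_true = rng.standard_normal(p)
    y = X @ w_true + 0.1 * rng.standard_normal(n)  # noise level is an assumption

    def objective(w):
        # Penalized least squares: half mean squared residual plus l2 penalty.
        return 0.5 * np.mean((X @ w - y) ** 2) + 0.5 * lam * np.dot(w, w)

    def stochastic_grad(w, batch_size):
        # Unbiased gradient estimate from a random mini-batch of size
        # batch_size, matching the paper's stochastic setting with
        # mini-batch size m_k.
        idx = rng.integers(0, n, size=batch_size)
        Xb, yb = X[idx], y[idx]
        return Xb.T @ (Xb @ w - yb) / batch_size + lam * w

    return objective, stochastic_grad
```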
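
The Experiment Setup row pins down the SGD baseline precisely: fixed mini-batch size m_k = p and diminishing step sizes t_k = 1/(k + 1000) for p = 100 (1/(k + 5000) for p = 500). A minimal sketch of that baseline follows, reusing the hypothetical stochastic_grad helper from the previous block; the zero starting point and iteration count are assumptions the excerpt does not specify.

```python
import numpy as np

def sgd_baseline(stochastic_grad, p=100, iters=5000, offset=1000):
    # SGD with fixed mini-batch size m_k = p and diminishing step sizes
    # t_k = 1/(k + offset); the quoted setup uses offset = 1000 for p = 100
    # and offset = 5000 for p = 500.
    w = np.zeros(p)  # zero initial iterate is an assumption
    for k in range(iters):
        g = stochastic_grad(w, batch_size=p)  # m_k = p samples per step
        w = w - g / (k + offset)              # t_k = 1/(k + offset)
    return w

# Hypothetical usage, combining the two sketches:
objective, stochastic_grad = make_penalized_ls_problem(p=100, rho=0.5)
w = sgd_baseline(stochastic_grad, p=100, offset=1000)
print(objective(w))
```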