Gradient Projection Iterative Sketch for Large-Scale Constrained Least-Squares

Authors: Junqi Tang, Mohammad Golbabaee, Mike E. Davies

ICML 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate our methods' computational efficiency compared to the classical accelerated gradient method and the variance-reduced stochastic gradient methods through numerical experiments on various large synthetic/real data sets.
Researcher Affiliation | Academia | Institute for Digital Communications, the University of Edinburgh, Edinburgh, UK. Correspondence to: Junqi Tang <J.Tang@ed.ac.uk>.
Pseudocode | Yes | Algorithm 1: Gradient Projection Iterative Sketch G(m, [η], [k]); Algorithm 2: Accelerated Gradient Projection Iterative Sketch A(m, [η], [k]); Algorithm 3: line-search scheme for GPIS and Acc-GPIS L(xi, ft(x), ft(xi), γu, γd)
Open Source Code | No | The paper only provides a link to a third-party implementation (SAGA) used for comparison, not the source code for the proposed GPIS/Acc-GPIS methods.
Open Datasets | Yes | We first run an unconstrained least-squares regression on the Year-prediction (Million-song) data set from the UCI Machine Learning Repository (Lichman, 2013)... Then we choose the Magic04 Gamma Telescope data set from (Lichman, 2013)
Dataset Splits | No | The paper mentions synthetic and real datasets but does not specify the splits (e.g., percentages, counts, or methodology) for training, validation, or testing.
Hardware Specification | Yes | We run all the numerical experiments on a DELL laptop with 2.60 GHz Intel Core i7-5600U CPU and 1.6 GB RAM, MATLAB version R2015b.
Software Dependencies | Yes | MATLAB version R2015b.
Experiment Setup | Yes | The sketch sizes of the proposed methods for each experiment are listed in Table 1. The authors implement the line-search scheme of (Nesterov, 2007), described in Algorithm 3, for GPIS and Acc-GPIS with parameters γu = 2 and γd = 2.
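To make the pseudocode entries concrete, the following is a minimal, assumption-laden sketch of a GPIS-style solver, not the authors' implementation. It assumes a Gaussian sketch, a nonnegativity constraint as the example constraint set, and iterative-Hessian-sketch-style subproblems solved by projected gradient steps with a Nesterov-style backtracking line search using the reported γu = γd = 2; the function name `gpis` and all iteration counts are illustrative.

```python
import numpy as np

def gpis(A, b, m, outer_iters=15, inner_iters=100, gamma_u=2.0, gamma_d=2.0, seed=0):
    """Illustrative GPIS-style solver for min_{x >= 0} 0.5*||Ax - b||^2.

    Each outer iteration draws a fresh Gaussian sketch S and approximately
    solves the sketched subproblem
        min_{x >= 0} 0.5*||S A (x - x0)||^2 + g0 . (x - x0),
    where g0 is the exact gradient at the warm start x0, using projected
    gradient steps with a backtracking line search (step estimate shrunk by
    gamma_d, grown by gamma_u). All concrete choices are assumptions.
    """
    n, d = A.shape
    rng = np.random.default_rng(seed)
    x0 = np.zeros(d)
    L = 1.0  # running estimate of the local Lipschitz constant
    for _ in range(outer_iters):
        g0 = A.T @ (A @ x0 - b)  # exact full gradient at the anchor point
        SA = (rng.standard_normal((m, n)) / np.sqrt(m)) @ A  # Gaussian sketch of A
        x = x0.copy()
        for _ in range(inner_iters):
            dx0 = x - x0
            grad = SA.T @ (SA @ dx0) + g0  # gradient of the sketched model
            f_x = 0.5 * np.dot(SA @ dx0, SA @ dx0) + np.dot(g0, dx0)
            L /= gamma_d  # optimistically try a larger step first
            while True:  # backtracking line search (Algorithm 3 analogue)
                x_new = np.maximum(x - grad / L, 0.0)  # projected gradient step
                d_new = x_new - x0
                f_new = 0.5 * np.dot(SA @ d_new, SA @ d_new) + np.dot(g0, d_new)
                step = x_new - x
                # accept once the quadratic upper bound with constant L holds
                if f_new <= f_x + np.dot(grad, step) + 0.5 * L * np.dot(step, step):
                    break
                L *= gamma_u
            x = x_new
        x0 = x  # warm start the next sketched subproblem
    return x0
```

On a small consistent problem with a nonnegative ground truth, the iterates converge to that solution; the fresh sketch per outer iteration is what keeps per-iteration cost at the sketched dimension m rather than the full n.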