Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Iterative Hessian Sketch: Fast and Accurate Solution Approximation for Constrained Least-Squares
Authors: Mert Pilanci, Martin J. Wainwright
JMLR 2016 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We illustrate our general theory with simulations for both unconstrained and constrained versions of least-squares, including ℓ1-regularization and nuclear norm constraints. We also numerically demonstrate the practicality of our approach in a real face expression classification experiment. |
| Researcher Affiliation | Academia | Mert Pilanci (EMAIL), Department of Electrical Engineering and Computer Science, University of California, Berkeley, CA 94720-1776, USA; Martin J. Wainwright (EMAIL), Department of Electrical Engineering and Computer Science and Department of Statistics, University of California, Berkeley, CA 94720-1776, USA |
| Pseudocode | Yes | Iterative Hessian sketch (IHS): Given an iteration number N: (1) Initialize at x⁰ = 0. (2) For iterations t = 0, 1, 2, …, N − 1, generate an independent sketch matrix Sᵗ⁺¹ ∈ ℝ^{m×n}, and perform the update xᵗ⁺¹ = arg min_{x ∈ C} { (1/(2m)) ‖Sᵗ⁺¹A(x − xᵗ)‖₂² − ⟨Aᵀ(y − Axᵗ), x⟩ } (Eq. 25). (3) Return the estimate x̂ = x^N. |
| Open Source Code | No | The paper does not contain an explicit statement about the release of source code or a link to a code repository for the methodology described. |
| Open Datasets | Yes | We performed a simulation study using the Japanese Female Facial Expression (JAFFE) database (Lyons et al., 1998). |
| Dataset Splits | Yes | We performed an approximately 80 : 20 split of the data set into ntrain = 170 training and ntest = 43 test images respectively. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory, or cloud instance types) used for running its experiments. It mentions 'computational gains' and 'running times' but not the underlying hardware. |
| Software Dependencies | No | The paper mentions various algorithms and methods (e.g., 'homotopy algorithm', 'LARS updates', 'FISTA') but does not specify any software libraries, packages, or solvers with their version numbers that were used for implementation or experiments. |
| Experiment Setup | Yes | We ran the IHS algorithm using m = 6d samples per iteration, for a total of N = 1 + ⌈log(p/n)/log 2⌉ = 4 iterations. (...) we generate observations from the linear model y = Ax + w, where x has at most s non-zero entries, and each row of the data matrix A ∈ ℝ^{n×d} is drawn i.i.d. from a N(1_d, Σ) distribution. (...) Setting a sparsity level s = ⌈3 log(d)⌉, we chose the unknown regression vector x with its support uniformly at random and non-zero entries ±1/√s with equal probability. (...) For comparison, we implemented the IHS algorithm with a projection dimension m = ⌈4s log(d)⌉. After projecting the data, we then used the homotopy method to solve the projected sub-problem at each step. In each trial, we ran the IHS algorithm for N = ⌈log n⌉ iterations. |
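The pseudocode quoted in the table can be sketched in NumPy for the unconstrained case (C = ℝ^d), where the subproblem in Eq. (25) has a closed-form solution. This is a minimal illustration, not the authors' implementation; the function name and parameters are ours, and it uses Gaussian sketches.

```python
import numpy as np

def iterative_hessian_sketch(A, y, m, N, seed=None):
    """Unconstrained IHS: at each step, solve the sketched subproblem
    x_{t+1} = x_t + argmin_u ||S A u||_2^2 / (2m) - <A^T (y - A x_t), u>,
    whose minimizer satisfies (A^T S^T S A / m) u = A^T (y - A x_t)."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(N):
        S = rng.standard_normal((m, n))   # fresh, independent Gaussian sketch
        SA = S @ A                        # m x d sketched data matrix
        g = A.T @ (y - A @ x)             # exact gradient of the full objective
        x = x + np.linalg.solve(SA.T @ SA / m, g)
    return x
```

Because each iteration draws an independent sketch, the error to the exact least-squares solution contracts geometrically, so a logarithmic number of iterations suffices for high accuracy even though each step only touches an m × d sketched matrix.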
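The synthetic sparse-regression instance described in the Experiment Setup row can be generated as follows. This is a hedged sketch of our reading of the quoted setup: we assume Σ = I_d (the quote leaves Σ unspecified) and unit-variance Gaussian noise w; the function name is illustrative.

```python
import numpy as np

def make_sparse_instance(n, d, noise_std=1.0, seed=None):
    """Generate y = A x + w with rows of A i.i.d. N(1_d, I_d),
    sparsity s = ceil(3 log d), and non-zero entries +-1/sqrt(s)."""
    rng = np.random.default_rng(seed)
    s = int(np.ceil(3 * np.log(d)))                    # sparsity level
    A = rng.standard_normal((n, d)) + 1.0              # rows ~ N(1_d, I_d)
    x = np.zeros(d)
    support = rng.choice(d, size=s, replace=False)     # uniformly random support
    x[support] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)
    y = A @ x + noise_std * rng.standard_normal(n)
    return A, y, x
```

In the quoted comparison, each IHS step would then sketch this data down to m = ⌈4s log(d)⌉ rows and solve the projected ℓ1-constrained sub-problem (e.g., via the homotopy method).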