An Efficient, Sparsity-Preserving, Online Algorithm for Low-Rank Approximation

Authors: David Anderson, Ming Gu

ICML 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Numeric experiments illustrate that SRLU preserves sparsity, highlights important data features and variables, can be efficiently updated, and calculates data approximations nearly as accurately as possible. To the best of our knowledge this is the first practical variant of the LU factorization for effective and efficient low-rank matrix approximation."
Researcher Affiliation | Academia | "University of California, Berkeley. Correspondence to: David Anderson <davidanderson@berkeley.edu>, Ming Gu <mgu@berkeley.edu>."
Pseudocode | Yes | Algorithm 1 The LU factorization; Algorithm 2 Crout LU; Algorithm 3 TRLUCP; Algorithm 4 Spectrum-Revealing Pivoting (SRP). (A generic truncated-LU sketch is given after this table.)
Open Source Code | No | The paper does not provide any concrete access information (e.g., a repository link or an explicit statement of release) for the source code of the methodology described.
Open Datasets | Yes | "We test the spectral norm error of these algorithms on matrices from the sparse matrix collection in (Davis & Hu, 2011). ... The university of florida sparse matrix collection. ACM Transactions on Mathematical Software, 38:1:1-1:25, 2011. URL http://www.cise.ufl.edu/research/sparse/matrices." (A loading sketch is given after this table.)
Dataset Splits | Yes | "The data contains 39,861 documents and 28,102 words/terms, and an initial SRLU factorization of rank 20 was performed on the first 30K documents." (A split-and-update sketch is given after this table.)
Hardware Specification | No | "All numeric experiments were run on NERSC’s Edison." Edison names a NERSC supercomputer, but the paper gives no specific hardware details such as CPU/GPU models or memory.
Software Dependencies | No | The paper mentions software such as 'LAPACK' and 'PROPACK' but does not provide specific version numbers for these or any other software dependencies.
Experiment Setup | Yes | "Note that for Subspace Iteration, we choose iteration parameter q = 0 and do not measure the time of applying the random projection... Using tuning parameter f = 5, no swaps were needed in all cases." (A randomized subspace iteration sketch is given after this table.)
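
The paper's pseudocode (Algorithms 1-4) is not reproduced on this page. As a point of reference only, the following is a minimal, generic sketch of a rank-k truncated LU factorization with complete pivoting in NumPy. It is a right-looking elimination rather than the paper's Crout (left-looking) variant, and it omits the TRLUCP blocking and the Spectrum-Revealing Pivoting swaps; the function name and interface are ours, not the authors'.

```python
import numpy as np

def truncated_lu(A, k):
    """Rank-k truncated LU with complete pivoting (a generic sketch; not
    the paper's TRLUCP or SRP).  Returns row/column permutations P, Q and
    factors L (m x k, unit lower-trapezoidal) and U (k x n) such that
    A[P][:, Q] ~ L @ U."""
    A = np.array(A, dtype=float)
    m, n = A.shape
    P, Q = np.arange(m), np.arange(n)
    for s in range(min(k, m, n)):
        # Complete pivoting: bring the largest remaining entry to (s, s).
        sub = np.abs(A[s:, s:])
        i, j = np.unravel_index(np.argmax(sub), sub.shape)
        i, j = i + s, j + s
        A[[s, i], :], P[[s, i]] = A[[i, s], :], P[[i, s]]
        A[:, [s, j]], Q[[s, j]] = A[:, [j, s]], Q[[j, s]]
        if A[s, s] == 0.0:  # remaining block is exactly zero
            break
        # Right-looking Schur-complement update (the paper's Crout LU
        # defers this update for efficiency).
        A[s + 1:, s] /= A[s, s]
        A[s + 1:, s + 1:] -= np.outer(A[s + 1:, s], A[s, s + 1:])
    L = np.tril(A[:, :k], -1) + np.eye(m, k)
    U = np.triu(A[:k, :])
    return P, Q, L, U

# Usage: spectral-norm error of a rank-20 approximation of a random matrix.
M = np.random.default_rng(0).standard_normal((200, 150))
P, Q, L, U = truncated_lu(M, 20)
print(np.linalg.norm(M[P][:, Q] - L @ U, 2))
```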
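
A short sketch of how a test matrix from the cited sparse matrix collection could be loaded and summarized, assuming a sparse Matrix Market (.mtx) file has already been downloaded; the file name matrix.mtx is a placeholder. The sigma_{k+1} computation is included only because it is the natural yardstick for the spectral-norm error comparisons the paper describes.

```python
import numpy as np
import scipy.io
import scipy.sparse.linalg as spla

# "matrix.mtx" is a placeholder path for any sparse Matrix Market file
# downloaded from the collection cited above.
A = scipy.io.mmread("matrix.mtx").tocsr()
m, n = A.shape
print(f"{m} x {n}, nnz = {A.nnz}, density = {A.nnz / (m * n):.2e}")

# sigma_{k+1} is the best possible rank-k spectral-norm error.
k = 20
s = spla.svds(A.asfptype(), k=k + 1, return_singular_vectors=False)
print("sigma_{k+1} =", np.sort(s)[0])
```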
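
The split quoted in the Dataset Splits row (an initial rank-20 factorization on the first 30,000 of 39,861 documents, with the remainder applied as updates) could be set up along the following lines. The matrix here is a random sparse stand-in with the quoted dimensions, and update_factorization is a placeholder name, since the paper's online SRLU update code is not available.

```python
import scipy.sparse as sp

# Random sparse stand-in: 39,861 documents by 28,102 words/terms.
X = sp.random(39_861, 28_102, density=1e-4, format="csr", random_state=0)

# Initial factorization block: the first 30,000 documents ...
X_init = X[:30_000]
# ... with the remaining documents arriving later as row updates.
X_stream = X[30_000:]

# Feed the new rows to an incremental update in blocks.
block = 1_000
for start in range(0, X_stream.shape[0], block):
    rows = X_stream[start:start + block]
    # update_factorization(rows)  # placeholder for the online SRLU update
```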
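
For the baseline quoted in the Experiment Setup row, a standard randomized subspace iteration (in the style of Halko, Martinsson, and Tropp) reduces to a single random projection followed by a QR factorization when q = 0. The sketch below is a generic implementation of that baseline, not the code used in the paper; the oversampling parameter and function name are our choices, and the SRP tuning parameter f = 5 belongs to the paper's own algorithm, not to this routine.

```python
import numpy as np

def randomized_subspace_iteration(A, k, q=0, oversample=10, seed=None):
    """Rank-k approximation A ~ U @ diag(s) @ Vt via randomized subspace
    iteration.  With q = 0 (the setting quoted from the paper) this is a
    single random projection followed by QR."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + oversample))
    Y = A @ Omega                # random projection (untimed in the paper's comparison)
    for _ in range(q):           # q power iterations; skipped entirely when q = 0
        Y = A @ (A.T @ Y)        # a production version would re-orthogonalize here
    Q, _ = np.linalg.qr(Y)
    B = Q.T @ A                  # project A onto the captured subspace
    U_hat, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ U_hat[:, :k], s[:k], Vt[:k]

# Usage: spectral-norm error of the rank-20, q = 0 approximation.
A = np.random.default_rng(0).standard_normal((500, 300))
U, s, Vt = randomized_subspace_iteration(A, k=20, q=0, seed=1)
print(np.linalg.norm(A - (U * s) @ Vt, 2))
```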