Low Rank Approximation using Error Correcting Coding Matrices

Authors: Shashanka Ubaru, Arya Mazumdar, Yousef Saad

ICML 2015

Reproducibility assessment: each entry below lists a reproducibility variable, its result, and the supporting LLM response.
Research Type: Experimental
LLM Response: 6. Numerical Experiments. The following experiments will illustrate the performance of subsampled code matrices as sampling matrices in Algorithm 1. Our first experiment is with a 4770 × 4770 matrix named Kohonen from the Pajek network (a directed graph's matrix representation), available from the UFL Sparse Matrix Collection (Davis & Hu, 2011). ... Figure 1 gives the actual error e_ℓ = ‖A − Q^(ℓ)(Q^(ℓ))ᵀA‖ for each number of samples ℓ when a subsampled dual BCH code matrix, a Gaussian matrix, and SRFT and SRHT matrices are used as sampling matrices, respectively. ... Table 1 compares the errors e_ℓ for ℓ samples, obtained for a variety of input matrices from different applications, when subsampled dual BCH code, Gaussian, and SRFT matrices were used. ... Eigenfaces: Eigenfaces is a popular method for face recognition based on Principal Component Analysis (PCA) (Turk & Pentland, 1991; Sirovich & Meytlis, 2009). In this experiment (chosen as a verifiable comparison with the results in (Gu, 2014)), we demonstrate the performance of the randomized algorithm with different sampling matrices on face recognition. The face dataset is obtained from the AT&T Labs Cambridge database of faces (Cambridge, 2002).
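As a concrete illustration of the quoted error metric, here is a minimal NumPy sketch, assuming a Gaussian test matrix in place of the paper's subsampled dual BCH code matrix; the function name `sampling_error` and the toy test matrix are illustrative, not from the paper.

```python
import numpy as np

def sampling_error(A, Omega):
    """e_l = ||A - Q (Q^T A)||_2, where Q is an orthonormal basis of A @ Omega."""
    Q, _ = np.linalg.qr(A @ Omega)             # basis for the sample matrix Y = A*Omega
    return np.linalg.norm(A - Q @ (Q.T @ A), 2)

rng = np.random.default_rng(0)
# Toy test matrix with a decaying spectrum (not the 4770 x 4770 Kohonen matrix).
A = rng.standard_normal((500, 500)) * (0.9 ** np.arange(500))
for ell in (10, 20, 40):
    Omega = rng.standard_normal((500, ell))    # Gaussian stand-in sampling matrix
    print(f"l = {ell:2d}, e_l = {sampling_error(A, Omega):.3e}")
```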
Researcher Affiliation: Academia
LLM Response: Shashanka Ubaru (UBARU001@UMN.EDU), Arya Mazumdar (ARYA@UMN.EDU), Yousef Saad (SAAD@CS.UMN.EDU), University of Minnesota-Twin Cities, MN, USA
Pseudocode: Yes
LLM Response: Algorithm 1 Prototype Algorithm.
Input: An m × n matrix A, a target rank k, and an oversampling parameter p.
Output: Rank-k factors U, Σ, and V in an approximate SVD A ≈ UΣVᵀ.
1. Form an n × ℓ subsampled code matrix Ω, as described in Section 3 and (4), using an [ℓ, r] linear coding scheme, where ℓ = k + p and r ≥ ⌈log₂ n⌉.
2. Form the m × ℓ sample matrix Y = AΩ.
3. Form an m × ℓ orthonormal matrix Q such that Y = QR.
4. Form the ℓ × n matrix B = QᵀA. Here QB is the low-rank approximation of the input matrix A.
5. Compute the SVD of the small matrix B = ÛΣVᵀ.
6. Form the matrix U = QÛ.
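Below is a minimal NumPy sketch of Algorithm 1, assuming a Gaussian test matrix as a stand-in for the subsampled code matrix Ω of step 1 (the dual BCH construction from Section 3 of the paper is not reproduced); the function name `prototype_low_rank_svd` is hypothetical.

```python
import numpy as np

def prototype_low_rank_svd(A, k, p, seed=None):
    """Approximate rank-k SVD following steps 2-6 of Algorithm 1.

    Step 1 is simplified: a Gaussian test matrix stands in for the
    paper's subsampled code matrix Omega.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    ell = k + p                                   # number of samples, l = k + p
    Omega = rng.standard_normal((n, ell))         # stand-in for step 1
    Y = A @ Omega                                 # step 2: sample matrix Y = A*Omega
    Q, _ = np.linalg.qr(Y)                        # step 3: orthonormal basis, Y = QR
    B = Q.T @ A                                   # step 4: small ell-by-n matrix
    U_hat, S, Vt = np.linalg.svd(B, full_matrices=False)  # step 5: SVD of B
    U = Q @ U_hat                                 # step 6: U = Q*U_hat
    return U[:, :k], S[:k], Vt[:k, :]             # truncate to the target rank k
```

Truncating the ℓ = k + p factors back to rank k at the end reflects the role of the oversampling parameter p: the p extra samples improve how well Q captures the dominant range of A before truncation.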
Open Source Code: No
LLM Response: No, the paper does not provide concrete access to source code for the methodology it describes. No statement about a code release or repository link was found.
Open Datasets: Yes
LLM Response: Our first experiment is with a 4770 × 4770 matrix named Kohonen from the Pajek network (a directed graph's matrix representation), available from the UFL Sparse Matrix Collection (Davis & Hu, 2011). ... The face dataset is obtained from the AT&T Labs Cambridge database of faces (Cambridge, 2002). URL: http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html
Dataset Splits: No
LLM Response: No, the paper does not provide validation dataset split information. It mentions that '200 of these faces, 5 from each individual are used as training images and the remaining 200 as test images to classify', indicating train and test splits but no explicit validation split.
Hardware Specification: No
LLM Response: No, the paper does not provide specific hardware details. It only states 'All experiments were implemented in matlab v8.1.'
Software Dependencies: Yes
LLM Response: All experiments were implemented in matlab v8.1. ... We used the in-built MATLAB function classify for feature training and classification.
Experiment Setup: Yes
LLM Response: Input: An m × n matrix A, a target rank k, and an oversampling parameter p. ... The subsampled code matrices given in (4), generated from a chosen coding scheme, are used as the sampling test matrix. ... with different sampling matrices Ω and different p values. ... for rank k < 40, p = 20 our results are superior.
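To make the quoted settings concrete, here is a hypothetical driver in the spirit of that setup (ranks k < 40 with p = 20), reusing the `prototype_low_rank_svd` sketch above on a synthetic matrix; none of the specifics below come from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
# Small synthetic matrix with decaying singular values (a stand-in for the
# application matrices the paper compares in Table 1).
A = rng.standard_normal((1000, 1000)) * (0.95 ** np.arange(1000))

for k in (10, 20, 30):                        # rank k < 40, oversampling p = 20
    U, S, Vt = prototype_low_rank_svd(A, k=k, p=20, seed=1)
    err = np.linalg.norm(A - U @ np.diag(S) @ Vt, 2)
    print(f"k = {k:2d}: spectral-norm error = {err:.3e}")
```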