Learning-Based Low-Rank Approximations

Authors: Piotr Indyk, Ali Vakilian, Yang Yuan

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experiments show that, for multiple types of data sets, a learned sketch matrix can substantially reduce the approximation loss compared to a random matrix S, sometimes by one order of magnitude.
Researcher Affiliation | Academia | Piotr Indyk, CSAIL, MIT, indyk@mit.edu; Ali Vakilian, University of Wisconsin-Madison, vakilian@wisc.edu; Yang Yuan, Tsinghua University, yuanyang@tsinghua.edu.cn
Pseudocode | Yes | Algorithm 1: Rank-k approximation of a matrix A using a sketch matrix S; Algorithm 2: Differentiable SVD implementation. (A minimal illustration of the sketch-and-solve step in Algorithm 1 appears after this table.)
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described; it mentions using PyTorch for the implementation but does not share its own code.
Open Datasets | Yes | Videos (Logo, Friends, Eagle): We downloaded three high resolution videos from YouTube, including a logo video, the Friends TV show, and an eagle nest cam. From each video, we collect 500 frames of size 1920 × 1080 × 3 pixels, and use 400 (100) matrices as the training (test) set. For each frame, we resize it as a 5760 × 1080 matrix. (The videos can be downloaded from http://youtu.be/L5HQoFIaT4I, http://youtu.be/xmLZsEfXEgE and http://youtu.be/ufnf_q_3Ofg.) Hyper: We use matrices from HS-SOD, a dataset for hyperspectral images from natural scenes [Imamoglu et al., 2018]. Tech: We use matrices from TechTC-300, a dataset for text categorization [Davidov et al., 2004].
Dataset Splits | No | From each video, we collect 500 frames of size 1920 × 1080 × 3 pixels, and use 400 (100) matrices as the training (test) set. The paper specifies training and test sets but does not mention a separate validation set.
Hardware Specification | No | The paper does not specify the hardware used for its experiments; it reports training times but gives no hardware details.
Software Dependencies | No | We used the autograd feature in PyTorch to numerically compute the gradient. The paper mentions PyTorch but does not give version numbers for PyTorch or any other software dependency.
Experiment Setup | No | The paper mentions using stochastic gradient descent and optimizing the non-zero entries of the sketch, but it does not report learning rates, batch sizes, numbers of epochs, or other hyperparameters; running for 3,000 iterations is mentioned only for one plot (Figure 5), not as a general setting. (A hedged PyTorch sketch of this training setup appears after this table.)
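The sketch-and-solve step behind Algorithm 1 (compute SA, restrict A to the row space of SA, then take the best rank-k approximation within that subspace) can be illustrated with a minimal NumPy sketch. This is our own reconstruction for illustration, not the authors' code: the function and variable names are ours, and the random CountSketch-style S shown in the usage example is only one possible baseline choice of sketch.

```python
import numpy as np

def sketch_rank_k_approx(A, S, k):
    """Rank-k approximation of A restricted to the row space of S @ A
    (classic sketch-and-solve; a sketch of what Algorithm 1 computes)."""
    SA = S @ A                                   # m x d sketch of A
    _, _, Vt = np.linalg.svd(SA, full_matrices=False)
    V = Vt.T                                     # d x m orthonormal basis of rowspace(SA)
    AV = A @ V                                   # project A onto that subspace
    U, s, Wt = np.linalg.svd(AV, full_matrices=False)
    AV_k = (U[:, :k] * s[:k]) @ Wt[:k, :]        # best rank-k approximation of AV
    return AV_k @ V.T                            # n x d output of rank <= k

# Usage with a random CountSketch-style S: one signed nonzero per column.
n, d, m, k = 500, 300, 40, 10
rng = np.random.default_rng(0)
A = rng.standard_normal((n, d))
S = np.zeros((m, n))
S[rng.integers(0, m, size=n), np.arange(n)] = rng.choice([-1.0, 1.0], size=n)
print(np.linalg.norm(A - sketch_rank_k_approx(A, S, k), "fro"))
```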
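The training described under Software Dependencies and Experiment Setup (learning the non-zero values of a CountSketch-shaped S by stochastic gradient descent, differentiating through the SVD) can likewise be sketched in PyTorch. This is an assumed reconstruction rather than the authors' released code: the paper implements its own differentiable SVD (Algorithm 2, based on power iteration) instead of relying on the built-in SVD gradient, and the learning rate, epoch count, and the `train_As` placeholder below are hypothetical.

```python
import torch

def scw_loss(S_vals, positions, m, A, k):
    """Approximation loss ||A - SCW(S, A)||_F where S is CountSketch-shaped:
    the positions are fixed and the one nonzero value per column is learnable."""
    n = A.shape[0]
    S = torch.zeros(m, n)
    S[positions, torch.arange(n)] = S_vals       # differentiable w.r.t. S_vals
    SA = S @ A
    V = torch.linalg.svd(SA, full_matrices=False).Vh.T
    AV = A @ V
    U, s, Wt = torch.linalg.svd(AV, full_matrices=False)
    A_k = (U[:, :k] * s[:k]) @ Wt[:k, :] @ V.T   # sketch-and-solve output
    return torch.linalg.norm(A - A_k)

# Hypothetical training loop; `train_As` stands in for the training matrices.
n, m, k = 500, 40, 10
train_As = [torch.randn(n, 300) for _ in range(8)]
positions = torch.randint(0, m, (n,))            # fixed sparsity pattern
S_vals = torch.randn(n, requires_grad=True)      # learnable nonzero values
opt = torch.optim.SGD([S_vals], lr=1e-2)         # hyperparameters not given in the paper
for epoch in range(10):
    for A in train_As:
        opt.zero_grad()
        loss = scw_loss(S_vals, positions, m, A, k)
        loss.backward()                          # autograd through the SVD
        opt.step()
```

The design choice this sketch mirrors is that only the values of the non-zero entries are optimized while their positions stay fixed, so the learned S retains the sparsity (and fast multiplication) of a CountSketch matrix.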