Low-Rank Tucker Decomposition of Large Tensors Using TensorSketch

Authors: Osman Asif Malik, Stephen Becker

Venue: NeurIPS 2018

Reproducibility assessment (variable, result, and the LLM's supporting response):
Research Type: Experimental
LLM Response: "We test our algorithms on sparse synthetic data and compare them to multiple other methods. We also apply one of our algorithms to a real dense 38 GB tensor representing a video and use the resulting decomposition to correctly classify frames containing disturbances."
Researcher Affiliation: Academia
LLM Response: "Osman Asif Malik, Department of Applied Mathematics, University of Colorado Boulder, osman.malik@colorado.edu; Stephen Becker, Department of Applied Mathematics, University of Colorado Boulder, stephen.becker@colorado.edu"
Pseudocode: Yes
LLM Response: Algorithm 1: TUCKER-ALS (also known as HOOI); Algorithm 2: TUCKER-TS (proposed); Algorithm 3: TUCKER-TTMTS (proposed). A minimal sketch of the TensorSketch operator that the proposed algorithms build on appears after this table.
Open Source Code: Yes
LLM Response: "A Matlab implementation of our algorithms can be found at https://github.com/OsmanMalik/tucker-tensorsketch."
Open Datasets: No
LLM Response: The paper mentions using "sparse synthetic data" and "a real dense 38 GB tensor representing a video", but it does not provide concrete access information (link, DOI, repository, or formal citation with authors and year) for a publicly available version of the video dataset. The GitHub link is for the code, not for the raw video data.
Dataset Splits: No
LLM Response: The paper does not specify dataset splits (exact percentages, sample counts, citations to predefined splits, or a splitting methodology) for training, validation, or test sets.
Hardware Specification: Yes
LLM Response: "All synthetic results are averages over ten runs in an environment using four cores of an Intel Xeon E5-2680 v3 @ 2.50 GHz CPU and 21 GB of RAM. This work utilized the RMACC Summit supercomputer, which is supported by the National Science Foundation (awards ACI-1532235 and ACI-1532236), the University of Colorado Boulder, and Colorado State University."
Software Dependencies: Yes
LLM Response: "Our Matlab implementation that we provided a link to at the beginning of Section 3 comes with demo script files for running experiments similar to those presented here. We compare our algorithms to TUCKER-ALS and MET in Tensor Toolbox version 2.6 [3, 19]."
Experiment Setup: Yes
LLM Response: "Unless stated otherwise, we stop after 50 iterations or when the change in G is less than 1e-3. We have found that the choice J1 = KR^(N-1) and J2 = KR^N, for a constant K > 4, works well in practice. ... unless stated otherwise, always use K = 10. For both plots, the tensor size is 500 x 500 x 500 with nnz(Y) ≈ 1e+6 and true rank (15, 15, 15). The algorithms use a target rank of (10, 10, 10). We add i.i.d. normally distributed noise with standard deviation 1e-3 to all nonzero tensor elements. A convergence tolerance of 1e-1 is used for these experiments. We compute a rank (10, 10, 10) Tucker decomposition of the tensor using TUCKER-TTMTS with the sketch dimension parameter set to K = 100 and a maximum of 30 iterations." A worked example of these sketch-dimension choices appears after this table.
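
To make the pseudocode entry concrete, the snippet below is a minimal Matlab sketch of the TensorSketch operator that TUCKER-TS and TUCKER-TTMTS build on: CountSketch each factor matrix independently, then combine the sketches with the FFT so that the Khatri-Rao (column-wise Kronecker) product is sketched without ever being formed. This is an illustration of the general technique under small stand-in matrices, not the authors' implementation; all variable names are ours.

```matlab
% Minimal TensorSketch illustration (not the authors' code).
% CountSketch each factor, then combine via FFT: the element-wise product
% of the factor FFTs corresponds to circular convolution of the sketches,
% which equals a CountSketch of the Khatri-Rao product's columns.
rng(1);
J  = 64;                                  % sketch dimension
A1 = randn(30, 5);                        % factor matrices of a small
A2 = randn(40, 5);                        % Khatri-Rao product

% Independent CountSketch hash buckets and signs for each factor
h1 = randi(J, 30, 1);  s1 = 2*randi(2, 30, 1) - 3;   % signs in {-1, +1}
h2 = randi(J, 40, 1);  s2 = 2*randi(2, 40, 1) - 3;

% CountSketch: add each signed row into its hashed bucket
C1 = zeros(J, 5);  C2 = zeros(J, 5);
for i = 1:30, C1(h1(i), :) = C1(h1(i), :) + s1(i) * A1(i, :); end
for i = 1:40, C2(h2(i), :) = C2(h2(i), :) + s2(i) * A2(i, :); end

% TensorSketch of each column of khatrirao(A1, A2): O(J log J) per column
% via FFT, instead of touching all 30*40 rows of the explicit product
T = real(ifft(fft(C1) .* fft(C2)));       % J x 5 sketched matrix
```

The key property is that the cost of forming T depends on the factor sizes and J, never on the (potentially huge) number of rows of the Khatri-Rao product itself.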
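
The sketch-dimension rule quoted under Experiment Setup is easy to state in code. The short worked example below uses the quoted defaults (K = 10, target rank 10 along each of N = 3 modes); the variable names and the fprintf call are ours, a sketch of the parameter arithmetic rather than the authors' script.

```matlab
% Sketch dimensions from the quoted setup: J1 = K*R^(N-1) and J2 = K*R^N,
% with K > 4 recommended and K = 10 used unless stated otherwise.
K = 10;               % sketch-dimension constant
R = 10;               % target rank along each mode
N = 3;                % number of tensor modes
J1 = K * R^(N-1);     % 1000:  sketch size paired with R^(N-1) columns
J2 = K * R^N;         % 10000: sketch size paired with R^N columns
fprintf('J1 = %d, J2 = %d\n', J1, J2);
```

Note that both dimensions scale with the target rank R and the constant K, not with the tensor's side lengths, which is what keeps the sketched least-squares problems small even for very large tensors.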