Fast and accurate randomized algorithms for low-rank tensor decompositions
Authors: Linjian Ma, Edgar Solomonik
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show that this new ALS algorithm, combined with a new initialization scheme based on the randomized range finder, yields decomposition accuracy comparable to the standard higher-order orthogonal iteration (HOOI) algorithm. The new algorithm achieves up to 22.0% relative decomposition residual improvement compared to the state-of-the-art sketched randomized algorithm for Tucker decomposition of various synthetic and real datasets. |
| Researcher Affiliation | Academia | Linjian Ma, Department of Computer Science, University of Illinois at Urbana-Champaign, lma16@illinois.edu; Edgar Solomonik, Department of Computer Science, University of Illinois at Urbana-Champaign, solomon2@illinois.edu |
| Pseudocode | Yes | Algorithm 1 Sketch-Tucker-ALS: Sketched ALS procedure for Tucker decomposition; Algorithm 2 RSVD-LRLS: Low-rank approximation of least squares solution via randomized SVD. A hedged NumPy sketch of the randomized-SVD building block appears after this table. |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for their methodology is open-source or publicly available. |
| Open Datasets | Yes | We test on two image datasets, COIL-100 [46] and a Time-Lapse hyperspectral radiance image dataset called Souto wood pile [44], both of which have been used previously as tensor decomposition benchmarks [7, 70, 36]. |
| Dataset Splits | No | The paper mentions running 5 ALS sweeps and calculating fitness, but does not specify a train/validation/test split for the datasets used in experiments. It uses synthetic and real datasets, and implicitly evaluates performance on these, but doesn't define how the data is partitioned for training, validation, or testing. |
| Hardware Specification | Yes | Our experiments are carried out on an Intel Core i7 2.9 GHz Quad-Core machine using NumPy [50] routines in Python. |
| Software Dependencies | No | Our experiments are carried out on an Intel Core i7 2.9 GHz Quad-Core machine using NumPy [50] routines in Python. While NumPy and Python are mentioned, specific version numbers for these software dependencies are not provided. |
| Experiment Setup | Yes | For all the experiments, we run 5 ALS sweeps unless otherwise specified, and calculate the fitness based on the output factor matrices as well as the core tensor. For each randomized algorithm, we set the sketch size to be KR^2. The constant factor K reveals the accuracy of each subproblem. For the randomized SVD routine in Algorithm 2, we set the dimension sizes of the random matrix S as s × (R + 5), where the oversampling size is 5. For all experiments, the Tucker rank is 5 × 5 × 5 and the sketch size parameter K = 16. For synthetic tensors, we set α = 1.6. (These values are collected in the parameter sketch following this table.) |
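
The Algorithm 2 entry above refers to a low-rank least-squares solve via randomized SVD. Below is a minimal NumPy sketch of the generic randomized range finder / randomized SVD building block, using the oversampling size of 5 quoted in the Experiment Setup row; the function name, signature, and structure are assumptions for illustration, not the authors' RSVD-LRLS implementation.

```python
import numpy as np

def randomized_svd(A, rank, oversample=5, seed=0):
    """Rank-`rank` SVD of A via a randomized range finder.

    Illustrative sketch only; not the paper's RSVD-LRLS routine.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    # Gaussian test matrix with rank + oversample columns (oversampling = 5 here).
    Omega = rng.standard_normal((n, rank + oversample))
    # Orthonormal basis for the approximate range of A.
    Q, _ = np.linalg.qr(A @ Omega)
    # Project A onto that basis and take a small deterministic SVD.
    B = Q.T @ A
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :rank], s[:rank], Vt[:rank, :]
```

In a sketched Tucker-ALS sweep, a routine of this shape would truncate each least-squares solution back to the target rank before the next factor-matrix update.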
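
For concreteness, the hyperparameters quoted in the Experiment Setup row map to the following sizes; the variable names below are hypothetical, and the snippet only restates the reported values.

```python
# Reported hyperparameters (variable names are assumptions, not the authors' code).
R = 5                       # Tucker rank per mode (rank 5 x 5 x 5)
K = 16                      # sketch-size constant factor
sketch_size = K * R ** 2    # sketch size K * R^2 = 400 for each sketched subproblem
oversample = 5              # oversampling size for the randomized SVD
rsvd_cols = R + oversample  # random test matrix has R + 5 = 10 columns
alpha = 1.6                 # synthetic-tensor generation parameter
num_sweeps = 5              # ALS sweeps per run

print(sketch_size, rsvd_cols)  # -> 400 10
```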