Near Optimal Sketching of Low-Rank Tensor Regression
Authors: Xingguo Li, Jarvis Haupt, David Woodruff
NeurIPS 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We study the performance of sketching for tensor regression through numerical experiments over both synthetic and real data sets. |
| Researcher Affiliation | Academia | ¹University of Minnesota, ²Georgia Tech, ³Carnegie Mellon University |
| Pseudocode | No | The paper describes algorithms verbally (e.g., 'cyclic block-coordinate minimization algorithm') but does not provide structured pseudocode or an explicit algorithm block. |
| Open Source Code | No | The paper mentions 'Matlab Tensor Reg toolbox' with a URL [35], but this is a third-party tool used, not the open-source code for the specific methodology described in this paper (sketching for low-rank tensor regression). |
| Open Datasets | Yes | We also examine sketching of low-rank tensor regression on a real dataset of MRI images [22]. The dataset consists of 56 frames of a human brain, each of which is of dimension 128 × 128 pixels, i.e., p_1 = p_2 = 128 and p_3 = 56. [22] Antoine Rosset, Luca Spadola, and Osman Ratib. OsiriX: an open-source software for navigating in multidimensional DICOM images. Journal of Digital Imaging, 17(3):205–216, 2004. |
| Dataset Splits | No | The paper does not explicitly provide training/test/validation dataset splits, specific percentages, or sample counts, nor does it mention cross-validation setups. |
| Hardware Specification | No | All results are run on a supercomputer due to the large scale of the data. |
| Software Dependencies | Yes | For solving the OLS problem for tensor regression (2), we use a cyclic block-coordinate minimization algorithm based on a tensor toolbox [35]. The reference [35] is 'Hua Zhou. Matlab TensorReg toolbox. http://hua-zhou.github.io/softwares/tensorreg/, 2013.' |
| Experiment Setup | Yes | For synthetic data, we generate the low-rank tensor as follows. For each d ∈ [D], we generate R random columns with N(0, 1) entries to form non-orthogonal tensor factors β_d = [β_d^(1), …, β_d^(R)] of [[β_1, …, β_D]] ∈ 𝒮_{D,R} independently. We also generate R real scalars η_1, …, η_R uniformly and independently from [1, 10]. Then 𝓑 is formed by 𝓑 = Σ_{r=1}^R η_r β_1^(r) ∘ ⋯ ∘ β_D^(r). The n tensor designs {A_i}_{i=1}^n are generated independently with i.i.d. N(0, 1) entries for 10% of the entries chosen uniformly at random, and the remaining entries are set to zero. We also generate the noise z to have i.i.d. N(0, σ_z²) entries, and the generation of the SJLT matrix Φ follows Definition 5. For both OLS and SOLS, we use random initializations for 𝓑, i.e., β_d has i.i.d. N(0, 1) entries for all d ∈ [D]. For the noiseless case, i.e., σ_z = 0, we choose R = 3, p_1 = p_2 = p_3 = 100, m = 5R(p_1 + p_2 + p_3) = 4500, and s = 200. Different values of n = 10⁴, 10⁵, and 10⁶ are chosen to compare both statistical and computational performances of OLS and SOLS. For the noisy case, the settings of all parameters are identical to those in the noiseless case, except that σ_z = 1. |
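The quoted setup can be sketched in NumPy at a reduced scale. This is not the authors' code: all names are mine, the dimensions are shrunk so it runs quickly (the paper uses R = 3, p_d = 100, n up to 10⁶, s = 200), and the SJLT is built in the standard way (s nonzeros per column with random signs scaled by 1/√s), which may differ in details from the paper's Definition 5.

```python
import numpy as np

rng = np.random.default_rng(0)

# Small stand-ins for the paper's sizes (R=3, p_d=100, n up to 1e6, s=200)
D, R = 3, 3
p = [10, 10, 10]            # mode dimensions p_1, ..., p_D
n = 2000                    # number of observations
m = 5 * R * sum(p)          # sketch dimension, mirroring m = 5R(p1+p2+p3)
s = 4                       # nonzeros per column of the SJLT

# Rank-R tensor: factors beta_d with i.i.d. N(0,1) columns, weights eta_r ~ U[1,10]
betas = [rng.standard_normal((p[d], R)) for d in range(D)]
eta = rng.uniform(1, 10, size=R)
B = sum(eta[r] * np.einsum('i,j,k->ijk',
                           betas[0][:, r], betas[1][:, r], betas[2][:, r])
        for r in range(R))

# Sparse Gaussian designs: 10% of entries i.i.d. N(0,1), the rest zero
A = rng.standard_normal((n,) + tuple(p))
A *= rng.random((n,) + tuple(p)) < 0.10

# Responses y_i = <A_i, B> + z_i with z_i ~ N(0, sigma_z^2)
sigma_z = 1.0
X = A.reshape(n, -1)                        # vectorized designs, one row per A_i
y = X @ B.ravel() + sigma_z * rng.standard_normal(n)

# SJLT sketch Phi (m x n): each column has s nonzeros equal to ±1/sqrt(s)
Phi = np.zeros((m, n))
for j in range(n):
    rows = rng.choice(m, size=s, replace=False)
    Phi[rows, j] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)

# SOLS works on the sketched data (Phi X, Phi y) instead of the full (X, y)
Xs, ys = Phi @ X, Phi @ y
print(Xs.shape, ys.shape)
```

The sketched pair (Xs, ys) has m rows instead of n, which is where the computational savings of SOLS over OLS come from when n ≫ m; the block-coordinate solver would then be run on this reduced problem.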