Sparse and Low-Rank Tensor Decomposition
Authors: Parikshit Shah, Nikhil Rao, Gongguo Tang
NeurIPS 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We validate our algorithm with numerical experiments. |
| Researcher Affiliation | Collaboration | Parikshit Shah parikshit@yahoo-inc.com Nikhil Rao nikhilr@cs.utexas.edu Gongguo Tang gtang@mines.edu |
| Pseudocode | Yes | Algorithm 1 Algorithm for sparse and low rank tensor decomposition |
| Open Source Code | No | The paper does not provide any statement or link indicating the release of source code for the described methodology. |
| Open Datasets | No | A tensor Z is generated as the sum of a low-rank tensor X and a sparse tensor Y. The low-rank component is generated as follows: three sets of r unit vectors uᵢ, vᵢ, wᵢ ∈ ℝ⁵⁰ are generated randomly, independently and uniformly distributed on the unit sphere. A random positive scale factor λᵢ (uniformly distributed on [0, 1]) is also chosen, and the tensor X = Σᵢ₌₁ʳ λᵢ uᵢ ⊗ vᵢ ⊗ wᵢ. The tensor Y is generated by (Bernoulli) randomly sampling its entries with probability p. |
| Dataset Splits | No | The paper describes generating synthetic data for experiments but does not specify explicit train/validation/test splits with percentages or sample counts. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU, GPU models, memory) used to run the experiments. |
| Software Dependencies | No | The optimization problem (6) is solved using CVX in MATLAB. No version numbers are provided for CVX or MATLAB. |
| Experiment Setup | Yes | In all our experiments, the regularization parameter was picked to be ν = 1/n. ... For each such p, we perform 10 trials... We run 5 independent trials |
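The synthetic data generation quoted above (low-rank X from random unit vectors, sparse Y from Bernoulli sampling) can be sketched as follows. This is a minimal NumPy reconstruction, not the authors' code: the distribution of the nonzero entries of Y is not stated in the quoted text, so standard normal values are an assumption, as are the default parameter values.

```python
import numpy as np


def generate_synthetic_tensor(n=50, r=5, p=0.05, seed=0):
    """Generate Z = X + Y with low-rank X and sparse Y, following the
    setup quoted in the report. Defaults for r, p, seed are assumptions."""
    rng = np.random.default_rng(seed)

    # r random unit vectors per mode, uniform on the unit sphere
    # (normalized Gaussians are uniformly distributed on the sphere).
    def unit_vectors(k, d):
        v = rng.standard_normal((k, d))
        return v / np.linalg.norm(v, axis=1, keepdims=True)

    U = unit_vectors(r, n)
    V = unit_vectors(r, n)
    W = unit_vectors(r, n)

    # Random positive scale factors, uniform on [0, 1].
    lam = rng.uniform(0.0, 1.0, size=r)

    # X = sum_i lam_i * u_i (outer) v_i (outer) w_i
    X = np.einsum('i,ia,ib,ic->abc', lam, U, V, W)

    # Sparse component: support sampled Bernoulli(p); entry values are
    # an assumption (standard normal), since the paper excerpt omits them.
    mask = rng.random((n, n, n)) < p
    Y = mask * rng.standard_normal((n, n, n))

    return X + Y, X, Y
```

As a sanity check, the mode-1 unfolding of X has rank at most r, while Y has roughly p·n³ nonzero entries.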