Provable Online CP/PARAFAC Decomposition of a Structured Tensor via Dictionary Learning
Authors: Sirisha Rambhatla, Xingguo Li, Jarvis Haupt
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Complementary to our theoretical results, our synthetic and real-world data evaluations showcase superior performance compared to related techniques. |
| Researcher Affiliation | Academia | (1) Computer Science Department, University of Southern California; (2) Computer Science Department, Princeton University; (3) Department of Electrical and Computer Engineering, University of Minnesota Twin Cities |
| Pseudocode | Yes | Algorithm 1 (TensorNOODL: Neurally plausible alternating Optimization-based Online Dictionary Learning for tensor decompositions); Algorithm 2 (Untangle Khatri-Rao Product (KRP): Recovering the sparse factors) |
| Open Source Code | Yes | Corresponding code is available at https://github.com/srambhatla/TensorNOODL. |
| Open Datasets | No | The paper describes generating synthetic data and using specific real-world datasets (NBA Shot Pattern Dataset, Enron data) but does not provide specific links, DOIs, repositories, or formal citations for public access to these datasets as used in their experiments. |
| Dataset Splits | No | The paper describes synthetic data generation parameters and real-world data usage (weekly NBA data), but does not specify explicit train/validation/test dataset splits (percentages, sample counts, or citations to predefined splits) in the main text. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiment. |
| Experiment Setup | Yes | Experimental set-up: We compare TensorNOODL with the online dictionary learning algorithms presented in [19] (Arora(b), which incurs bias, and Arora(u), which claims no bias) and with [20], which can be viewed as a (matricized) variant of ALS. We analyze the recovery performance of the algorithms across different choices of tensor dimensions J = K = {100, 300, 500} for a fixed n = 300, rank m = {50, 150, 300, 450, 600}, and sparsity parameters α = β = {0.005, 0.01, 0.05} of the factors B^(t) and C^(t), across 3 Monte-Carlo runs. Parameters η_A, η_x, τ, T, C, and R are set as per assumptions A.3, A.5, and A.6. |
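
To make the synthetic setup above concrete, here is a minimal sketch of generating one data sample under the structured CP/PARAFAC model the paper studies: a fixed dictionary A with normalized columns and sparse factors B^(t), C^(t) with Bernoulli supports. The dimensions correspond to one point of the grid reported in the table; the Gaussian distributional choices and the `sparse_factor` helper are our illustrative assumptions, not the authors' released code.

```python
import numpy as np

# Hypothetical sketch (not the authors' implementation): build one sample
# of the structured CP model Z = sum_i A[:, i] o B[:, i] o C[:, i], where
# A is a fixed dictionary with unit-norm columns and B, C are sparse
# factors whose supports are i.i.d. Bernoulli(alpha) / Bernoulli(beta).
rng = np.random.default_rng(0)

n, J, K = 300, 100, 100   # tensor dimensions (one grid point from the setup)
m = 50                    # rank / number of dictionary elements
alpha = beta = 0.01       # expected fraction of non-zeros per column

# Dictionary A: random Gaussian columns, normalized to unit length
# (Gaussian choice is an assumption for illustration).
A = rng.standard_normal((n, m))
A /= np.linalg.norm(A, axis=0, keepdims=True)

def sparse_factor(rows, cols, p, rng):
    """Factor with Bernoulli(p) support and Gaussian non-zero values."""
    support = rng.random((rows, cols)) < p
    values = rng.standard_normal((rows, cols))
    return support * values

B = sparse_factor(J, m, alpha, rng)  # sparse factor B^(t)
C = sparse_factor(K, m, beta, rng)   # sparse factor C^(t)

# Form the rank-m CP tensor; unfolding Z along the first mode equals
# A @ khatri_rao(C, B).T, i.e. the mode-1 matricization exposes the
# column-wise Khatri-Rao product of the sparse factors.
Z = np.einsum('ni,ji,ki->njk', A, B, C)
print(Z.shape)  # (300, 100, 100)
```

Per the pseudocode row above, TensorNOODL (Algorithm 1) learns the dictionary A from a stream of such tensors, and Algorithm 2 untangles the Khatri-Rao product in the unfolding to recover the sparse factors B^(t) and C^(t).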