Interpolating Convex and Non-Convex Tensor Decompositions via the Subspace Norm
Authors: Qinqing Zheng, Ryota Tomioka
NeurIPS 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we conduct tensor denoising experiments on synthetic and real datasets, to numerically confirm our analysis in previous sections. |
| Researcher Affiliation | Academia | Qinqing Zheng, University of Chicago (qinqing@cs.uchicago.edu); Ryota Tomioka, Toyota Technological Institute at Chicago (tomioka@ttic.edu) |
| Pseudocode | Yes | We use Algorithm 2 described in Section 3. (This refers to Algorithm 1 in Appendix B, "Tensor Denoising with Subspace Norm".) |
| Open Source Code | No | The paper does not provide any links to open-source code or explicitly state that the code for the methodology is being released. |
| Open Datasets | Yes | The amino acid dataset [5] is a semi-realistic dataset commonly used as a benchmark for low rank tensor modeling. |
| Dataset Splits | No | The paper does not provide specific details on dataset splits (e.g., train/validation/test percentages, sample counts, or citations to predefined splits) or cross-validation methodology for reproducibility. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions 'tensorlab [22]' but does not provide a specific version number for this or any other software dependency. |
| Experiment Setup | Yes | The CP decomposition is computed by the tensorlab [22] with 20 random initializations. We assumed CP knows the true rank is 2. For the subspace norm, we use Algorithm 2 described in Section 3. We also select the top 2 singular vectors when constructing the Û^(k)'s. We computed the solutions for 20 values of regularization parameter λ logarithmically spaced between 1 and 100. For the overlapped and the latent norm, we use ADMM described in [25]; we also computed 20 solutions with the same λ's used for the subspace norm. |
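
The experiment-setup row quotes two concrete, reproducible choices: the subspace bases Û^(k) are built from the top-2 singular vectors of each mode-k unfolding, and the regularization path uses 20 values of λ logarithmically spaced between 1 and 100. The following is a minimal NumPy sketch of just those two steps, assuming a 3-way tensor; the function names (`mode_k_unfold`, `top_singular_vectors`) are hypothetical and this is not the authors' code or the tensorlab/ADMM solvers they actually used.

```python
# Illustrative sketch only (assumed setup, not the paper's implementation).
import numpy as np

def mode_k_unfold(tensor, k):
    """Unfold a 3-way tensor along mode k (mode-k fibers become columns)."""
    return np.moveaxis(tensor, k, 0).reshape(tensor.shape[k], -1)

def top_singular_vectors(tensor, k, r=2):
    """Top-r left singular vectors of the mode-k unfolding, used as the basis Û^(k)."""
    U, _, _ = np.linalg.svd(mode_k_unfold(tensor, k), full_matrices=False)
    return U[:, :r]

# 20 regularization parameters, logarithmically spaced between 1 and 100.
lambdas = np.logspace(0, 2, num=20)

# Example: build the three subspace bases for a noisy 20x20x20 tensor.
rng = np.random.default_rng(0)
Y = rng.standard_normal((20, 20, 20))
U_hats = [top_singular_vectors(Y, k, r=2) for k in range(3)]
```

The denoising solve itself (the subspace-norm problem over this λ grid, and the ADMM solvers for the overlapped and latent norms) is not reconstructable from the quoted text alone and is therefore omitted here.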