On Tensor Train Rank Minimization: Statistical Efficiency and Scalable Algorithm
Authors: Masaaki Imaizumi, Takanori Maehara, Kohei Hayashi
NeurIPS 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In experiments, we numerically confirm the derived bounds and empirically demonstrate the performance of our method with a real higher-order tensor. |
| Researcher Affiliation | Academia | Masaaki Imaizumi Institute of Statistical Mathematics RIKEN Center for Advanced Intelligence Project Takanori Maehara RIKEN Center for Advanced Intelligence Project Kohei Hayashi National Institute of Advanced Industrial Science and Technology RIKEN Center for Advanced Intelligence Project |
| Pseudocode | No | The paper describes the ADMM and TT-RAM algorithms using mathematical equations and textual explanations, but it does not provide any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statements or links indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | "We apply the proposed tensor completion methods for analyzing the electricity consumption data [13]." The cited dataset is public: [13] M. Lichman. UCI Machine Learning Repository, 2013. |
| Dataset Splits | No | The paper mentions "We apply the proposed tensor completion methods for analyzing the electricity consumption data [13]... Then, we create X by randomly selecting n = 10,000 elements from the conditional distribution and regarding the other elements as missing." and "We select hyperparameters using a grid search with cross-validation." While cross-validation implies a splitting strategy, specific train/validation/test split percentages or sample counts are not explicitly provided. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not specify any software dependencies or libraries with their version numbers. |
| Experiment Setup | Yes | We prepare two tensors of different size: an order-4 tensor of size 8 × 8 × 10 × 10 and an order-5 tensor of size 5 × 5 × 7 × 7 × 7. At the order-4 tensor, the TT rank is set as (R1, R2, R3) where R1, R2, R3 ∈ {3, 5, 7}. At the order-5 tensor, the TT rank is set as (R1, R2, R3, R4) where R1, R2, R3, R4 ∈ {2, 4}. For estimation, we set the size of Gk and k as 10, which is larger than the true TT rank. The regularization coefficient λn is selected from {1, 3, 5}. The parameters for random projection are set as s = 20 and D1 = D2 = 10. We discretize the value of the dataset into 10 values and set K ∈ {5, 7, 8, 10}. |
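To make the experiment-setup row concrete, the sketch below constructs a synthetic order-4 tensor of size 8 × 8 × 10 × 10 in tensor train (TT) format with TT rank (3, 5, 7), matching the paper's smaller test case. This is a generic NumPy illustration of the TT format, not the authors' code; the core shapes follow the standard convention that core G_k has shape (R_{k-1}, I_k, R_k) with boundary ranks R_0 = R_4 = 1.

```python
import numpy as np

def tt_to_full(cores):
    """Contract a list of TT cores G_k, each of shape (R_{k-1}, I_k, R_k),
    into the full tensor of shape (I_1, ..., I_K)."""
    out = cores[0]  # shape (1, I_1, R_1)
    for core in cores[1:]:
        # Sum over the shared rank index: trailing axis of `out`,
        # leading axis of `core`.
        out = np.tensordot(out, core, axes=([-1], [0]))
    # Drop the dummy boundary ranks R_0 = R_K = 1.
    return out.squeeze(axis=(0, -1))

# Order-4 tensor of size 8 x 8 x 10 x 10 with TT rank (3, 5, 7),
# as in the paper's synthetic setup (random cores are an assumption here).
dims = (8, 8, 10, 10)
ranks = (1, 3, 5, 7, 1)  # boundary ranks are always 1
rng = np.random.default_rng(0)
cores = [rng.standard_normal((ranks[k], dims[k], ranks[k + 1]))
         for k in range(len(dims))]
X = tt_to_full(cores)
print(X.shape)  # (8, 8, 10, 10)
```

Note how cheap the TT representation is: the cores hold 1·8·3 + 3·8·5 + 5·10·7 + 7·10·1 = 564 parameters versus 6,400 entries in the full tensor, which is the storage saving that TT rank minimization exploits.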