A Blind Block Term Decomposition of High Order Tensors
Authors: Yunfeng Cai, Ping Li (pp. 6868-6876)
AAAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Numerical results show that our method is able to compute the BBTD, even in the presence of noise to some extent, whereas optimization-based methods (e.g., MINF and NLS in TENSORLAB) may fail to converge. ... Numerical simulations show that our method outperforms state-of-the-art optimization-based methods. |
| Researcher Affiliation | Industry | Yunfeng Cai, Ping Li. Cognitive Computing Lab, Baidu Research. No. 10 Xibeiwang East Road, Beijing 100193, China; 10900 NE 8th St., Bellevue, Washington 98004, USA. {caiyunfeng, liping11}@baidu.com |
| Pseudocode | Yes | The overall algorithm is summarized in Algorithm 1. Algorithm 1: SVDS for BBTD (SVDS4BBTD for short) |
| Open Source Code | No | The paper does not contain any explicit statements about releasing source code or provide a link to a code repository for the methodology described. |
| Open Datasets | No | We generate our data tensor as T = T_0 + σN = [[S; A^(1), ..., A^(N)]] + σN, where T, T_0, N are the noisy tensor, noiseless tensor, and noise tensor, respectively, the A^(n)'s are random matrices, S is a random τ-block-diagonal tensor with τ = (M_r^(n)), N is a random tensor, and σ ≥ 0 is a parameter controlling the noise level. Here all random variables are i.i.d. from the standard normal distribution. (A MATLAB sketch of this generation process follows the table.) |
| Dataset Splits | No | The paper describes how data tensors are generated for numerical experiments but does not provide specific details on how this generated data is split into training, validation, or test sets. |
| Hardware Specification | No | The paper states, 'All the numerical tests were carried out using MATLAB R2018a, with machine ϵ = 2.2 × 10^-16,' but does not provide specific hardware details such as CPU/GPU models, memory, or other machine specifications. |
| Software Dependencies | Yes | All the numerical tests were carried out using MATLAB R2018a, with machine ϵ = 2.2 × 10^-16. We compare the performance of our algorithm with the nonlinear unconstrained optimization (MINF) and nonlinear least squares (NLS) algorithms (Sorber, Van Barel, and De Lathauwer 2012, 2013, 2015), which are available in TENSORLAB 3.0 (available online at https://www.tensorlab.net). (A hedged sketch of this comparison setup follows the table.) |
| Experiment Setup | No | The paper describes the data-generation process with its parameters (e.g., I, M_r^(n), R, η) and some general experimental settings (e.g., 'generate the data 10 times', 'for each R, we perform our algorithm 20 times'), but it does not provide specific hyperparameters for model training or a detailed, reproducible experimental setup beyond the data generation. |
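
The data-generation recipe quoted in the Open Datasets row is concrete enough to sketch. The MATLAB fragment below is a minimal reconstruction under stated assumptions, not the authors' code: the function names `gen_bbtd_data` and `modeprod` and the example sizes are illustrative choices, while the block-diagonal core, random factor matrices, and i.i.d. standard-normal noise follow the quoted description with N = 3 modes.

```matlab
function T = gen_bbtd_data(I, M, sigma)
% Synthetic noisy BBTD tensor per the paper's description:
%   T = T0 + sigma*Noise,  T0 = [[S; A^(1), A^(2), A^(3)]],
% with S block diagonal (R blocks, block r of size M(r,1) x M(r,2) x M(r,3))
% and all random entries i.i.d. standard normal.
% I: 1x3 tensor dimensions; M: Rx3 block sizes; sigma >= 0: noise level.
R = size(M, 1);
L = sum(M, 1);                         % core dimensions per mode
S = zeros(L);                          % block-diagonal core
off = zeros(1, 3);
for r = 1:R
    idx = arrayfun(@(n) off(n) + (1:M(r, n)), 1:3, 'UniformOutput', false);
    S(idx{:}) = randn(M(r, :));        % fill the r-th diagonal block
    off = off + M(r, :);
end
T0 = S;
for n = 1:3                            % apply [[S; A^(1), A^(2), A^(3)]]
    T0 = modeprod(T0, randn(I(n), L(n)), n);
end
T = T0 + sigma * randn(I);             % add scaled Gaussian noise
end

function Y = modeprod(X, A, n)
% Mode-n tensor-matrix product (Tensorlab's tmprod would also serve).
sz = size(X);
perm = [n, setdiff(1:ndims(X), n)];
Xn = reshape(permute(X, perm), sz(n), []);
sz(n) = size(A, 1);
Y = ipermute(reshape(A * Xn, sz(perm)), perm);
end
```

For example, `T = gen_bbtd_data([20 20 20], [2 2 2; 3 3 3], 1e-2)` builds a 20 × 20 × 20 tensor with R = 2 block terms at noise level σ = 0.01.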
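
Similarly, the comparison protocol quoted in the last two rows (data generated 10 times; 20 runs per R) can be mirrored in a short trial loop. The sketch below assumes Tensorlab 3.0 is on the MATLAB path and uses its btd_nls / btd_minf solvers together with btdgen and frob in their documented cell-of-cells BTD format; the random initialization is our assumption, since the paper does not specify one. It reuses gen_bbtd_data from the previous sketch.

```matlab
% Hypothetical trial loop mirroring the quoted counts: 10 random data
% tensors, 20 solver runs each. Assumes Tensorlab 3.0 is on the path
% (https://www.tensorlab.net).
I = [20 20 20]; M = [2 2 2; 3 3 3]; sigma = 1e-2;
R = size(M, 1);
relerr = zeros(10, 20);                        % relative fit per run
for d = 1:10                                   % "generate the data 10 times"
    T = gen_bbtd_data(I, M, sigma);
    for t = 1:20                               % 20 runs per tensor
        U0 = cell(1, R);                       % random initial block terms
        for r = 1:R
            U0{r} = {randn(I(1), M(r, 1)), randn(I(2), M(r, 2)), ...
                     randn(I(3), M(r, 3)), randn(M(r, :))};
        end
        U = btd_nls(T, U0);                    % NLS; btd_minf for MINF
        relerr(d, t) = frob(T - btdgen(U)) / frob(T);
    end
end
```

A run that fails to converge would surface as a large entry in relerr, which is one way the quoted claim that MINF/NLS 'may fail to converge' could be probed.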