Error Analysis of Tensor-Train Cross Approximation
Authors: Zhen Qin, Alexander Lidiak, Zhexuan Gong, Gongguo Tang, Michael B. Wakin, Zhihui Zhu
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our results illustrate how the choice of selected subtensors affects the quality of the cross approximation and that the approximation error caused by model error and/or measurement error may not grow exponentially with the order of the tensor. These results are verified by numerical experiments and may have important implications for the usefulness of cross approximations for high-order tensors, such as those encountered in the description of quantum many-body states. |
| Researcher Affiliation | Academia | Zhen Qin (Ohio State University, qin.660@osu.edu); Alexander Lidiak (Colorado School of Mines, alidiak@mines.edu); Zhexuan Gong (Colorado School of Mines, gong@mines.edu); Gongguo Tang (University of Colorado, gongguo.tang@colorado.edu); Michael B. Wakin (Colorado School of Mines, mwakin@mines.edu); Zhihui Zhu (Ohio State University, zhu.3440@osu.edu) |
| Pseudocode | No | The paper does not include a clearly labeled pseudocode or algorithm block. |
| Open Source Code | No | The paper does not provide an unambiguous statement or direct link to its source code in the main text. |
| Open Datasets | No | The paper generates synthetic data for its experiments and does not use a publicly available dataset that requires access information. |
| Dataset Splits | No | The paper generates synthetic data and does not specify training, validation, or test splits. |
| Hardware Specification | No | The paper's self-assessment section explicitly states: "Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [No]" |
| Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies used in the experiments. |
| Experiment Setup | Yes | In this section, we conduct numerical experiments to evaluate the performance of the cross approximation (9) with noisy measurements. We generate an N-th order tensor T ∈ ℝ^{d₁×⋯×d_N} approximately in the TT format as T = Tr + ηF, where Tr is in the TT format with mode ranks [r₁, ..., r_{N−1}], generated by truncating a random Gaussian tensor using a sequential SVD [9], and F is a random tensor with independent entries drawn from the standard normal distribution. We then normalize Tr and F to unit Frobenius norm, so η controls the low-rank approximation error. To simplify the selection of parameters, we let d = d₁ = ⋯ = d_N = 2, r = r₁ = ⋯ = r_{N−1}, and τ = τ₁ = ⋯ = τ_{N−1} for the cross approximation (9). ... To control the signal-to-noise level, we scale the noise by a factor µ. Thus, we have T̃ = T + µE = Tr + ηF + µE, (11) where ‖Tr‖²_F = ‖F‖²_F = ‖E‖²_F = 1. |
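The data-generation procedure quoted above can be sketched in NumPy. This is a minimal illustration, not the authors' code: the function `tt_truncate` and the parameter values (N = 8, r = 2, η = 0.1, µ = 0.01) are assumptions chosen for demonstration; the paper sweeps these parameters.

```python
import numpy as np

def tt_truncate(T, rank):
    """Truncate a tensor to TT format with uniform mode rank via sequential SVD,
    then recontract the cores back into a full tensor of the same shape."""
    dims = T.shape
    N = len(dims)
    cores = []
    C = T
    r_prev = 1
    for k in range(N - 1):
        # Unfold: (r_{k-1} * d_k) x (remaining modes), keep top-`rank` singular vectors.
        C = C.reshape(r_prev * dims[k], -1)
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        r = min(rank, len(s))
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        C = s[:r, None] * Vt[:r]
        r_prev = r
    cores.append(C.reshape(r_prev, dims[-1], 1))
    # Recontract the TT cores into a full tensor.
    full = cores[0]
    for core in cores[1:]:
        full = np.tensordot(full, core, axes=([-1], [0]))
    return full.reshape(dims)

# Illustrative parameters (not the paper's exact sweep): d = 2 throughout.
rng = np.random.default_rng(0)
N, d, r = 8, 2, 2
eta, mu = 0.1, 0.01

G = rng.standard_normal((d,) * N)          # random Gaussian tensor
Tr = tt_truncate(G, r)                     # exact-TT component
Tr /= np.linalg.norm(Tr)                   # unit Frobenius norm
F = rng.standard_normal((d,) * N)
F /= np.linalg.norm(F)                     # model-error term
E = rng.standard_normal((d,) * N)
E /= np.linalg.norm(E)                     # measurement-noise term

T = Tr + eta * F                           # approximately-TT tensor
T_noisy = T + mu * E                       # Eq. (11): T~ = Tr + eta*F + mu*E
```

Here η sets the distance of T from the exact TT manifold while µ sets the measurement-noise level, matching the roles described in the quoted setup.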