Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Guaranteed Nonconvex Factorization Approach for Tensor Train Recovery

Authors: Zhen Qin, Michael B. Wakin, Zhihui Zhu

JMLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We conduct various experiments to validate our theoretical findings. ... In this section, we conduct numerical experiments to evaluate the performance of the RGD algorithm for tensor train sensing and completion. In all the experiments, we generate an order-N ground truth tensor X ∈ R^{d1 × ··· × dN} in TT format with ranks r = (r1, . . . , rN−1) by first generating a random Gaussian tensor with i.i.d. entries from the normal distribution, and then using the sequential SVD algorithm to obtain a TT format tensor, which is finally normalized to unit Frobenius norm, i.e., ‖X‖_F = 1. To simplify the selection of parameters, we set d = d1 = ··· = dN and r = r1 = ··· = rN−1. For the RGD algorithm in (26) and (27), we set µ = 0.5 to compute factors. To avoid the high computational complexity associated with σ²(X), we replace it with ‖X‖²_F in the RGD. For each experimental setting, we conduct 20 Monte Carlo trials and then take the average over the 20 trials to report the results."
Researcher Affiliation | Academia | Zhen Qin (EMAIL), Department of Computer Science and Engineering, Ohio State University, Columbus, Ohio 43201, USA; Michael B. Wakin (EMAIL), Department of Electrical Engineering, Colorado School of Mines, Golden, Colorado 80401, USA; Zhihui Zhu (EMAIL), Department of Computer Science and Engineering, Ohio State University, Columbus, Ohio 43201, USA.
Pseudocode | No | The paper describes the RGD algorithm using equations (13), (14), (26), and (27), along with textual descriptions of the steps. However, it does not present a formally labeled "Pseudocode" or "Algorithm" block with structured steps.
Open Source Code | No | The paper includes a license statement: "License: CC-BY 4.0, see https://creativecommons.org/licenses/by/4.0/. Attribution requirements are provided at http://jmlr.org/papers/v25/24-0029.html." This refers to the paper's publication license, not to source code for the methodology presented in the paper. No other explicit statement or link to source code is provided.
Open Datasets | No | "In all the experiments, we generate an order-N ground truth tensor X ∈ R^{d1 × ··· × dN} in TT format with ranks r = (r1, . . . , rN−1) by first generating a random Gaussian tensor with i.i.d. entries from the normal distribution, and then using the sequential SVD algorithm to obtain a TT format tensor, which is finally normalized to unit Frobenius norm, i.e., ‖X‖_F = 1. ... We conduct 100 independent trials to evaluate the success rate for each pair of N and m."
Dataset Splits | No | The paper states: "In all the experiments, we generate an order-N ground truth tensor X ∈ R^{d1 × ··· × dN} ... by first generating a random Gaussian tensor with i.i.d. entries from the normal distribution ..." and "For each experimental setting, we conduct 20 Monte Carlo trials ...". This indicates that data is generated afresh for each experiment rather than obtained by splitting a predefined dataset into train/validation/test sets.
Hardware Specification | No | "We acknowledge funding support from NSF Grants No. CCF-1839232, CCF-2106834, CCF-2241298 and ECCS-2409701. We thank the Ohio Supercomputer Center for providing the computational resources needed in carrying out this work." The acknowledgment mentions computational resources but does not specify the hardware used.
Software Dependencies | No | "Elements of matrices and tensors are denoted in parentheses, as in Matlab notation." The only software reference is to MATLAB-style notation; no dependencies or versions are listed.
Experiment Setup | Yes | "To simplify the selection of parameters, we set d = d1 = ··· = dN and r = r1 = ··· = rN−1. For the RGD algorithm in (26) and (27), we set µ = 0.5 to compute factors. To avoid the high computational complexity associated with σ²(X), we replace it with ‖X‖²_F in the RGD. For each experimental setting, we conduct 20 Monte Carlo trials and then take the average over the 20 trials to report the results. ... A ground truth matrix X ∈ R^{30×20} of rank r is generated ... The step size for both the GD and RGD algorithms is set to µ = 0.6. For each experimental setting, we set m = 150r, conduct 20 Monte Carlo trials, and then take the average over the 20 trials to report the results. ... We set d = 4 and r = 2, and then assess the performance across various tensor orders N. ... We fix the number of measurements m = 500 and vary the tensor order N and noise level γ²."
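The ground-truth generation quoted above (random Gaussian tensor, sequential SVD into TT format, normalization to unit Frobenius norm) can be sketched as follows. This is a generic TT-SVD sketch under the paper's stated setup, not the authors' code; the function names and the NumPy implementation are assumptions.

```python
import numpy as np

def tt_svd(x, max_rank):
    """Sequential (TT-)SVD: split x into third-order TT cores, ranks capped at max_rank."""
    dims = x.shape
    cores = []
    r_prev = 1
    mat = x.reshape(r_prev * dims[0], -1)
    for k in range(len(dims) - 1):
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(s))                     # truncate to the target TT rank
        cores.append(u[:, :r].reshape(r_prev, dims[k], r))
        mat = (s[:r, None] * vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(mat.reshape(r_prev, dims[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract TT cores back into a full tensor."""
    full = cores[0]
    for g in cores[1:]:
        full = np.tensordot(full, g, axes=([full.ndim - 1], [0]))
    return full.reshape(full.shape[1:-1])             # drop the boundary size-1 modes

rng = np.random.default_rng(0)
N, d, r = 4, 4, 2                      # example values from the paper: d = 4, r = 2
x = rng.standard_normal((d,) * N)      # i.i.d. Gaussian tensor
x_tt = tt_to_full(tt_svd(x, r))        # project onto TT format with ranks <= r
x_tt /= np.linalg.norm(x_tt)           # normalize so that ||X||_F = 1
```

`tt_svd` returns N third-order cores G_k of shape (r_{k-1}, d, r_k) with r_0 = r_N = 1; truncating each SVD to rank r caps the TT ranks, as in the paper's data-generation step.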
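The matrix experiment quoted above (rank-r ground truth X ∈ R^{30×20}, m = 150r measurements, step size µ = 0.6) can be illustrated with a plain factored-gradient-descent sketch for noiseless matrix sensing. The paper's GD/RGD updates in (26) and (27) are not reproduced in the excerpt, so the Gaussian measurement model, the update rule, the iteration count, and the initialization scale below are all stand-in assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2, r = 30, 20, 2          # ground-truth shape and rank, as in the paper
m = 150 * r                    # number of measurements, as in the paper

# Rank-r ground truth, normalized to unit Frobenius norm.
X = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))
X /= np.linalg.norm(X)

# Assumed Gaussian sensing model: y_i = <A_i, X>, with A scaled so A*A ~ identity.
A = rng.standard_normal((m, n1, n2)) / np.sqrt(m)
y = np.tensordot(A, X, axes=([1, 2], [0, 1]))

# Plain gradient descent on the factors of U V^T (small balanced random init assumed).
U = 0.1 * rng.standard_normal((n1, r))
V = 0.1 * rng.standard_normal((n2, r))
mu = 0.6                       # step size from the paper's matrix experiment
for _ in range(1000):
    resid = np.tensordot(A, U @ V.T, axes=([1, 2], [0, 1])) - y
    G = np.tensordot(resid, A, axes=([0], [0]))   # gradient of 0.5*||A(UV^T) - y||^2
    U, V = U - mu * (G @ V), V - mu * (G.T @ U)

rel_err = np.linalg.norm(U @ V.T - X) / np.linalg.norm(X)
```

With m = 300 Gaussian measurements for a 30×20 rank-2 target, the measurement operator is well conditioned on low-rank matrices, so this generic factored GD recovers X to small relative error; the paper's RGD additionally uses a preconditioned (Riemannian) update not shown here.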