Nonconvex Low-Rank Tensor Completion from Noisy Data
Authors: Changxiao Cai, Gen Li, H. Vincent Poor, Yuxin Chen
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We carry out a series of numerical experiments to corroborate our theoretical findings. Fig. 1 shows the numerical estimation errors vs. iteration count t in a typical Monte Carlo trial. Fig. 2 plots the empirical success rates over 100 independent trials. We report in Fig. 3 three types of squared relative errors... vs. SNR. |
| Researcher Affiliation | Academia | Changxiao Cai (Princeton University); Gen Li (Tsinghua University); H. Vincent Poor (Princeton University); Yuxin Chen (Princeton University) |
| Pseudocode | Yes | Algorithm 1: Gradient descent for nonconvex tensor completion; Algorithm 2: Spectral initialization for nonconvex tensor completion; Algorithm 3: Retrieval of low-rank tensor factors from a given subspace estimate. (Illustrative sketches of Algorithms 1 and 2 appear below the table.) |
| Open Source Code | No | The paper does not provide an explicit statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | No | The paper states: 'We generate the truth $T^{\star} = \sum_{1 \le i \le r} u_i^{\otimes 3}$ randomly with $u_i \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0, I_d)$', indicating synthetic data generation without providing access information for a publicly available dataset. (This generation procedure is sketched in code below the table.) |
| Dataset Splits | No | The paper generates synthetic data for experiments and averages results over Monte Carlo trials, but it does not provide specific training/validation/test dataset splits, percentages, or sample counts. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU models, CPU models, or memory specifications used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, or specific solvers) needed to replicate the experiment. |
| Experiment Setup | Yes | The learning rates, the restart number, and the pruning threshold are taken to be $\eta_t \equiv 0.2$, $L = 64$, $\epsilon_{\mathrm{th}} = 0.4$. Set $d = 100$, $r = 4$ and $p = 0.1$. Take $t_0 = 100$, $d = 100$, $r = 4$ and $p = 0.1$. (See the gradient descent sketch below the table.) |
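
The synthetic setup quoted in the Open Datasets and Experiment Setup rows is straightforward to reproduce. The following is a minimal NumPy sketch, not the authors' code; the noise level `sigma` and the random seed are our assumptions (the paper sweeps the SNR in Fig. 3 rather than fixing a single noise level).

```python
import numpy as np

rng = np.random.default_rng(0)   # seed is an arbitrary choice
d, r, p = 100, 4, 0.1            # dimension, rank, sampling rate from the paper's setup

# Ground truth T* = sum_{1<=i<=r} u_i^{otimes 3} with u_i i.i.d. N(0, I_d).
U_star = rng.standard_normal((d, r))
T_star = np.einsum('il,jl,kl->ijk', U_star, U_star, U_star)

# Each entry is observed independently with probability p; additive Gaussian
# noise models the noisy setting (sigma = 0.1 is our assumption, not the paper's).
sigma = 0.1
mask = rng.random((d, d, d)) < p
T_obs = np.where(mask, T_star + sigma * rng.standard_normal((d, d, d)), 0.0)
```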
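The report lists Algorithm 2 (spectral initialization) but does not reproduce its steps. One common realization, unfolding the inverse-propensity-weighted tensor and deleting the diagonal of the resulting Gram matrix before an eigendecomposition, is sketched below; treat it as an assumed stand-in rather than the authors' exact procedure. Algorithm 3, which retrieves individual factors from this subspace using the quoted $L = 64$ restarts and pruning threshold $\epsilon_{\mathrm{th}} = 0.4$, is not reproduced here.

```python
import numpy as np

def spectral_init(T_obs: np.ndarray, p: float, r: int) -> np.ndarray:
    """Estimate the rank-r column subspace from the mode-1 unfolding.

    A generic diagonal-deleted spectral method; the paper's Algorithm 2
    may differ in its exact debiasing steps.
    """
    d = T_obs.shape[0]
    B = (T_obs / p).reshape(d, d * d)   # rescaled mode-1 unfolding
    G = B @ B.T                         # d x d Gram matrix
    np.fill_diagonal(G, 0.0)            # diagonal deletion reduces noise bias
    _, V = np.linalg.eigh(G)            # eigenvalues in ascending order
    return V[:, -r:]                    # top-r eigenspace
```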
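Algorithm 1 then refines the initialization by gradient descent on the nonconvex least-squares loss. The sketch below continues the data-generation snippet above and assumes the normalization $f(U) = \frac{1}{6p}\,\|P_{\Omega}(\sum_{l} u_l^{\otimes 3} - T^{\star})\|_F^2$; the paper reports $\eta_t \equiv 0.2$ under its own scaling, so under a different normalization the step size would need rescaling. The initialization `U0` stands in for the output of Algorithms 2 and 3.

```python
import numpy as np

def grad_f(U: np.ndarray, T_obs: np.ndarray, mask: np.ndarray, p: float) -> np.ndarray:
    """Gradient of f(U) = (1/(6p)) * || P_Omega( sum_l u_l^{x3} - T ) ||_F^2.

    Differentiating X[i,j,k] = sum_l U[i,l] U[j,l] U[k,l] with respect to U
    yields three symmetric terms, each contracted against the residual E.
    """
    X = np.einsum('il,jl,kl->ijk', U, U, U)   # current rank-r symmetric estimate
    E = np.where(mask, X - T_obs, 0.0)        # residual on observed entries only
    g = (np.einsum('ajk,jl,kl->al', E, U, U)
         + np.einsum('jak,jl,kl->al', E, U, U)
         + np.einsum('jka,jl,kl->al', E, U, U))
    return g / (3.0 * p)

def gradient_descent(U0: np.ndarray, T_obs: np.ndarray, mask: np.ndarray,
                     p: float, eta: float, t0: int = 100) -> np.ndarray:
    """Run t0 gradient steps from U0; t0 = 100 matches the quoted setup.

    eta is the step size; the reported eta_t = 0.2 presumes the paper's own
    scaling, and a smaller value may be needed under our normalization.
    """
    U = U0.copy()
    for _ in range(t0):
        U -= eta * grad_f(U, T_obs, mask, p)
    return U
```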