Square Deal: Lower Bounds and Improved Relaxations for Tensor Recovery

Authors: Cun Mu, Bo Huang, John Wright, Donald Goldfarb

ICML 2014 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "We corroborate the improvement of square reshaping with numerical experiments on low-rank tensor completion (LRTC) for both noise-free (synthetic data) and noisy (real data) cases." |
| Researcher Affiliation | Academia | Cun Mu (CM3052@COLUMBIA.EDU), Bo Huang (BH2359@COLUMBIA.EDU), John Wright (JOHNWRIGHT@EE.COLUMBIA.EDU), Donald Goldfarb (GOLDFARB@COLUMBIA.EDU); Department of Industrial Engineering and Operations Research, Columbia University (Mu, Huang, Goldfarb); Department of Electrical Engineering, Columbia University (Wright). |
| Pseudocode | No | The paper describes using algorithms such as ALM and Frank-Wolfe but does not provide pseudocode or structured algorithm blocks for them. |
| Open Source Code | No | The paper does not state that its source code is released, nor does it link to a code repository for the described methodology. |
| Open Datasets | No | The paper describes how its synthetic data are generated and uses three video datasets (Ocean, Campus, and Face videos), giving their dimensions and referring to the appendix for details. However, it provides no links, DOIs, repositories, or formal citations for these video datasets, so their public availability cannot be confirmed. |
| Dataset Splits | No | The paper does not specify training/validation/test splits, either as percentages or as absolute sample counts. |
| Hardware Specification | No | The paper does not report the hardware used for its experiments (e.g., CPU/GPU models, processor types, or memory). |
| Software Dependencies | No | The paper names specific algorithms and solvers (ALM, an accelerated linearized Bregman algorithm, the Frank-Wolfe algorithm) but does not list software dependencies or their version numbers. |
| Experiment Setup | Yes | "We generate a four-way tensor $\mathcal{X}_0 \in \mathbb{R}^{n \times n \times n \times n}$ as $\mathcal{X}_0 = c_0 \times_1 u_1 \times_2 u_2 \times_3 u_3 \times_4 u_4$, where $c_0 \sim \mathcal{N}(0, 1)$ and the $u_i$'s are generated uniformly over the unit sphere $\mathbb{S}^{n-1} = \{x \in \mathbb{R}^n \mid \|x\|_2 = 1\}$. The observed entries are chosen uniformly with ratio $\rho$. Since the unfolding matrix of $\mathcal{X}_0$ along each mode has the same distribution, we set each $\lambda_i = 1$. Therefore, we compare the recovery performances between... We increase the problem size $n$ from 10 to 30 with increment 1, and the observation ratio $\rho$ from 0.01 to 0.2 with increment 0.01. For each $(\rho, n)$-pair, we simulate 10 test instances... For our square model, we set $I = \{1, 4\}$ for the Ocean video and $I = \{1, 2\}$ for the Campus and Face videos, to construct more balanced embedded matrices $\mathcal{X}_I$. Due to the existence of noise in real data, we solve regularized least squares problems... We set $\beta_i$ and $\beta$ to their oracle values, i.e. $\beta_i = \|(\mathcal{X}_0)_{(i)}\|_*$ and $\beta = \|(\mathcal{X}_0)_I\|_*$, which can reasonably be considered (nearly) optimal settings." |
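
For the noise-free synthetic case described in the Experiment Setup row, the sketch below illustrates the data generation and the square reshaping in NumPy. This is not the authors' code: the helper names (`make_rank1_tensor`, `observe`, `reshape_by_modes`) are hypothetical, and only the setup quoted above is reproduced; the recovery solvers (ALM, Frank-Wolfe) are not implemented here.

```python
# Minimal sketch (assumed helpers, not the paper's code) of the synthetic
# low-rank tensor completion setup: a rank-1 four-way tensor, a uniformly
# sampled observation mask, and the balanced "square" matrix embedding.
import numpy as np

def make_rank1_tensor(n, rng):
    """Rank-1 four-way tensor X0 = c0 x_1 u1 x_2 u2 x_3 u3 x_4 u4,
    with c0 ~ N(0, 1) and each ui uniform on the unit sphere S^{n-1}."""
    c0 = rng.standard_normal()
    us = [rng.standard_normal(n) for _ in range(4)]
    us = [u / np.linalg.norm(u) for u in us]        # normalize onto the sphere
    return c0 * np.einsum("i,j,k,l->ijkl", *us)     # outer product of the ui's

def observe(X0, rho, rng):
    """Reveal a uniformly random subset of entries with observation ratio rho."""
    total = X0.size
    m = int(round(rho * total))
    mask = np.zeros(total, dtype=bool)
    mask[rng.choice(total, size=m, replace=False)] = True
    mask = mask.reshape(X0.shape)
    return mask, np.where(mask, X0, 0.0)

def reshape_by_modes(X, modes_I):
    """Matricize X by grouping the (1-based) modes in modes_I as rows and the
    remaining modes as columns: on an n x n x n x n tensor, I = {1, 2} gives
    the balanced n^2 x n^2 square embedding, I = {1} the mode-1 unfolding."""
    axes_I = sorted(m - 1 for m in modes_I)
    axes_J = [ax for ax in range(X.ndim) if ax not in axes_I]
    rows = int(np.prod([X.shape[ax] for ax in axes_I]))
    return np.transpose(X, axes_I + axes_J).reshape(rows, -1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, rho = 20, 0.10                          # one (rho, n)-pair from the grid
    X0 = make_rank1_tensor(n, rng)
    mask, X_obs = observe(X0, rho, rng)
    X_square = reshape_by_modes(X0, {1, 2})    # n^2 x n^2 square embedding
    X_mode1 = reshape_by_modes(X0, {1})        # n x n^3 standard unfolding
    print(X_square.shape, X_mode1.shape, mask.mean())
```

Grouping two modes on each side (I = {1, 2}) is what makes the embedded matrix "balanced" (n^2 x n^2) compared with the n x n^3 mode-1 unfolding, which is the contrast the square reshaping argument in the paper rests on.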