A Dual Framework for Low-rank Tensor Completion

Authors: Madhav Nimishakavi, Pratik Kumar Jawanpuria, Bamdev Mishra

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The experiments illustrate the efficacy of the proposed algorithm on several real-world datasets across applications.
Researcher Affiliation | Collaboration | Indian Institute of Science, India; Microsoft, India
Pseudocode | Yes | Algorithm 1: Proposed Riemannian trust-region algorithm for (7). Input: Y_Ω, rank (r_1, …, r_K), regularization parameter λ, and tolerance ϵ. Initialize: u ∈ M. repeat 1: Compute the gradient ∇_u ℓ for (7) as given in Lemma 1. 2: Compute the search direction which minimizes the trust-region subproblem. It makes use of ∇_u ℓ and its directional derivative presented in Lemma 1 for (7). 3: Update x with the retraction step to maintain strict feasibility on M. Specifically, for the spectrahedron manifold, U_k ← (U_k + V_k)/‖U_k + V_k‖_F, where V_k is the search direction. until ‖∇_u ℓ‖_F < ϵ. Output: u
Open Source Code | Yes | Our codes are available at https://pratikjawanpuria.com/.
Open Datasets | Yes | a) Ribeira is a hyperspectral image [16] of size 1017 × 1340 × 33, where each slice represents the image measured at a particular wavelength. We re-size it to 203 × 268 × 33 [37, 26, 24]; b) Tomato is a video sequence dataset [27, 8] of size 242 × 320 × 167; and c) Baboon is an RGB image [49], modeled as a 256 × 256 × 3 tensor.
Dataset Splits | Yes | We set λ_k = λn_k ∀k in (7). Hence, we tune only one hyper-parameter λ, from the set {10⁻³, 10⁻², …, 10³}, via five-fold cross-validation of the training data.
Hardware Specification | No | The paper mentions that the algorithm is implemented using Manopt in Matlab but does not specify any details about the hardware used for experiments (e.g., GPU/CPU models, memory).
Software Dependencies | No | Our algorithm is implemented using the Manopt toolbox [7] in Matlab, which has an off-the-shelf generic TR implementation. The paper mentions software but does not provide version numbers for Manopt or Matlab.
Experiment Setup | Yes | We set λ_k = λn_k ∀k in (7). Hence, we tune only one hyper-parameter λ, from the set {10⁻³, 10⁻², …, 10³}, via five-fold cross-validation of the training data.
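The retraction step quoted in the Pseudocode row, U_k ← (U_k + V_k)/‖U_k + V_k‖_F on the spectrahedron manifold, can be sketched as follows. This is a minimal NumPy illustration, not the paper's Manopt/Matlab code; the helper name `retract` and the matrix sizes are assumptions made here for demonstration.

```python
import numpy as np

def retract(U, V):
    """Retraction onto the unit-Frobenius-norm set {U : ||U||_F = 1},
    as in step 3 of Algorithm 1: U <- (U + V) / ||U + V||_F."""
    W = U + V
    return W / np.linalg.norm(W, "fro")

# Toy usage with hypothetical dimensions (5 x 3 factor matrix).
rng = np.random.default_rng(0)
U = rng.standard_normal((5, 3))
U /= np.linalg.norm(U, "fro")          # start on the manifold
V = 0.1 * rng.standard_normal((5, 3))  # a small search direction
U_next = retract(U, V)                 # ||U_next||_F == 1, so strict
                                       # feasibility is maintained
```

Normalizing by the Frobenius norm after each trust-region step is what keeps every iterate strictly feasible, which is the point of the retraction in step 3.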