Hybrid Singular Value Thresholding for Tensor Completion

Authors: Xiaoqin Zhang, Zhengyuan Zhou, Di Wang, Yi Ma

AAAI 2014

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental Evaluation: To validate the effectiveness of the proposed tensor completion algorithm, we conduct two comparison experiments as follows: (1) the proposed metric versus tensor nuclear norm; (2) the proposed metric versus matrix nuclear norm. All the experiments are conducted with MATLAB on a platform with Pentium IV 3.2 GHz CPU and 1 GB memory.
Researcher Affiliation | Academia | (1) Institute of Intelligent System and Decision, Wenzhou University, Zhejiang, China; (2) Department of Electrical Engineering, Stanford University, CA, USA; (3) Department of Electrical and Computer Engineering, ShanghaiTech University, Shanghai, China
Pseudocode | Yes | Algorithm 1: Hybrid Threshold Computation (the paper's pseudocode is not reproduced here; an illustrative sketch of the generic thresholding operator follows the table)
Open Source Code | No | The paper does not provide any statement about releasing open-source code for the described methodology, nor does it provide a link to a code repository.
Open Datasets | No | We first randomly generate a pure low-rank tensor L_o ∈ R^(50×50×50) whose Tucker rank (defined in the introduction) is (2,2,2) (the same set is adopted in (Liu et al. 2013)). ... we generate a low-rank tensor L_o ∈ R^(30×30×30×30) with Tucker rank (2,2,2,2) (the same set is adopted in (Mu et al. 2013)). (A synthetic-data sketch following this protocol appears below the table.)
Dataset Splits | No | The paper mentions 'we sample a fraction c of elements in L_o as the observations' and varies c, but does not specify explicit train/validation/test splits with percentages or counts, nor does it reference standard predefined splits for the generated data.
Hardware Specification | Yes | All the experiments are conducted with MATLAB on a platform with Pentium IV 3.2 GHz CPU and 1 GB memory.
Software Dependencies | No | The paper mentions 'MATLAB' but does not provide a specific version number for it or any other software dependencies.
Experiment Setup | No | The paper does not provide specific hyperparameter values or detailed training configurations (e.g., learning rates, batch sizes, optimizer settings) for the proposed method's experimental setup. It only states that the comparison methods' parameters were chosen for best performance.
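
The Pseudocode row above cites Algorithm 1 (Hybrid Threshold Computation), which is not reproduced here. For reference only, below is a minimal NumPy sketch of the standard singular value soft-thresholding operator that singular-value-thresholding approaches to matrix and tensor completion build on; the function name `svt` and the threshold `tau` are illustrative, and this is not the paper's hybrid scheme.

```python
import numpy as np

def svt(X, tau):
    """Shrink each singular value of X by tau (soft-thresholding).

    Generic singular value thresholding operator; the paper's hybrid
    variant, which combines thresholds differently, is not reproduced here.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

# Toy usage: shrink the spectrum of a noisy rank-2 matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 50))
X_hat = svt(A + 0.1 * rng.standard_normal((50, 50)), tau=1.0)
```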
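
The Open Datasets and Dataset Splits rows quote the paper's synthetic protocol: a randomly generated tensor L_o of Tucker rank (2,2,2) with a fraction c of its entries observed. Below is a minimal sketch of one standard way to generate such data, assuming a Tucker construction (random core multiplied by random factor matrices) and uniform random sampling; the variable names and the value c = 0.3 are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
dims, rank = (50, 50, 50), (2, 2, 2)

# Tucker construction: a small random core times a random factor matrix
# along each mode yields a tensor of Tucker rank at most (2, 2, 2).
core = rng.standard_normal(rank)
factors = [rng.standard_normal((d, r)) for d, r in zip(dims, rank)]
L_o = np.einsum('abc,ia,jb,kc->ijk', core, *factors)

# Observe a fraction c of the entries uniformly at random
# (the paper varies c; 0.3 is an arbitrary illustrative value).
c = 0.3
mask = rng.random(dims) < c
observed = np.where(mask, L_o, 0.0)
```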