Tensor Wheel Decomposition and Its Tensor Completion Application

Authors: Zhong-Cheng Wu, Ting-Zhu Huang, Liang-Jian Deng, Hong-Xia Dou, Deyu Meng

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results elaborate that the proposed method is significantly superior to other tensor decomposition-based state-of-the-art methods on synthetic and real-world data, implying the merits of TW decomposition. In this section, we design substantial numerical experiments on synthetic and real-world data to verify the superiority of the proposed TW-TC method compared with others, which are constructed based on several commonly used tensor decompositions.
Researcher Affiliation | Academia | (1) School of Mathematical Sciences, University of Electronic Science and Technology of China; (2) School of Science, Xihua University; (3) School of Mathematics and Statistics, Xi'an Jiaotong University; (4) Pazhou Laboratory (Huangpu)
Pseudocode | Yes (see the generic ALS sketch after the table) | Algorithm 1: The Alternating Least Squares Algorithm for TW Decomposition (TW-ALS). Algorithm 2: The Proximal Alternating Minimization (PAM)-Based Solver for the TW-TC Model.
Open Source Code | Yes | The code is available at: https://github.com/zhongchengwu/code_TWDec.
Open Datasets | Yes | MSI data: the tested MSI data is of size 200 × 200 × 31 (i.e., height × width × spectral), called Toy, and is cropped from the CAVE dataset (https://www.cs.columbia.edu/CAVE/databases/multispectral/). Video data: the tested video data contains two color videos (CVs), News and Container (http://trace.eas.asu.edu/yuv/), and one hyperspectral video (HSV) [20] (https://openremotesensing.net/knowledgebase/).
Dataset Splits | No | Subsequently, the partially observed tensors are generated by random sampling with two sampling rates (SRs): 20% and 40%. Afterwards, the partially observed tensors are created by random sampling with three SRs: 5%, 10%, and 20%. These quotes describe how missing data is simulated for the tensor completion task, not traditional train/validation/test splits of a dataset.
Hardware Specification | Yes | All the experiments are implemented in MATLAB (R2021a) on a computer with 64 GB RAM and an Intel(R) Core(TM) i9-10900KF CPU @ 3.70 GHz.
Software Dependencies | Yes | All the experiments are implemented in MATLAB (R2021a)...
Experiment Setup | Yes (see the data-generation sketch after the table) | More specifically, the synthetic data consists of 64 third-order, 81 fourth-order, and 32 fifth-order tensors, whose sizes are {I1 × I2 × I3 : I1, I2, I3 ∈ {45, 50, 55, 60}}, {I1 × I2 × I3 × I4 : I1, I2, I3, I4 ∈ {18, 20, 22}}, and {I1 × I2 × I3 × I4 × I5 : I1, I2, I3, I4, I5 ∈ {7, 8}}, and whose Tucker-ranks are (6, 6, 6), (5, 5, 5, 5), and (3, 3, 3, 3, 3), respectively. All synthetic data are numerically renormalized into [0, 1]. Subsequently, the partially observed tensors are generated by random sampling with two sampling rates (SRs): 20% and 40%. Regarding TW-ranks, we empirically assign R1 = L3 and R3 = L1 = L2 based on observations of multiple third-order real-world data experiments, and then select the ranks R1, R2, and R3 from the candidate sets {3, 4, 5}, {10, 15, 20, 25}, and {2, 3}, respectively.
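
The Dataset Splits and Experiment Setup rows above describe how the synthetic tensors and their partial observations are produced, but no generation code is quoted. The following plain-MATLAB sketch shows one way to build a single third-order example under the stated settings: a tensor of size 45 × 50 × 55 with Tucker-rank (6, 6, 6), renormalized into [0, 1], then randomly sampled at SR = 20%. The size, rank, sampling rate, and [0, 1] renormalization are taken from the quoted setup; the helper ttm_mode, the uniform-threshold masking, and all variable names are illustrative assumptions rather than the authors' implementation (which lives in the linked repository).

    % One synthetic third-order tensor with Tucker-rank (6, 6, 6) and size
    % 45 x 50 x 55, renormalized into [0, 1] and sampled at SR = 20%.
    % ttm_mode and the variable names are illustrative, not the authors' code.
    rng(0);
    sz = [45, 50, 55];   tuckerRank = [6, 6, 6];   SR = 0.20;

    X = randn(tuckerRank);                               % random Tucker core
    for k = 1:3
        X = ttm_mode(X, randn(sz(k), tuckerRank(k)), k); % expand mode k with a random factor
    end
    X = (X - min(X(:))) / (max(X(:)) - min(X(:)));       % renormalize into [0, 1]

    Omega = rand(sz) <= SR;                              % keep roughly 20% of the entries
    Xobs  = X .* Omega;                                  % partially observed tensor

    function Y = ttm_mode(X, U, k)
    % Mode-k tensor-matrix product: unfold along mode k, left-multiply by U, fold back.
    sz0  = size(X);
    perm = [k, setdiff(1:ndims(X), k)];
    Yk   = U * reshape(permute(X, perm), sz0(k), []);
    sz1  = sz0;  sz1(k) = size(U, 1);
    Y    = ipermute(reshape(Yk, sz1(perm)), perm);
    end

For the real-world MSI and video data, the same masking step (Omega and the elementwise product) would presumably be applied directly to the loaded tensor at the SRs listed above.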
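
The Pseudocode row above names two algorithms, TW-ALS and a PAM-based solver, but only their titles are quoted. For orientation, here is a minimal, generic alternating-least-squares loop for a plain rank-R matrix factorization X ≈ UVᵀ; it only illustrates the fix-one-block, solve-a-least-squares-subproblem-for-the-other pattern that alternating schemes of this kind share, and it is not the authors' TW-ALS or PAM solver, whose unknowns are the TW core tensors. X, R, and the stopping tolerance are hypothetical.

    % Generic ALS skeleton for X ~ U*V' (NOT the paper's TW-ALS); it only shows
    % the alternating "fix one factor, solve least squares for the other" pattern.
    rng(0);
    R = 5;                               % target rank (hypothetical)
    X = rand(50, R) * rand(R, 40);       % synthetic, exactly rank-R data (hypothetical)
    U = rand(50, R);  V = rand(40, R);   % random initialization
    for iter = 1:200
        U = (X  * V) / (V' * V);         % least squares update of U with V fixed
        V = (X' * U) / (U' * U);         % least squares update of V with U fixed
        if norm(X - U * V', 'fro') / norm(X, 'fro') < 1e-8
            break;                       % stop once the relative error is tiny
        end
    end

The PAM-based solver in Algorithm 2 follows the same alternating structure but, as its name indicates, adds a proximal term to each block subproblem.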