Handling Slice Permutations Variability in Tensor Recovery

Authors: Jingjing Zheng, Xiaoqin Zhang, Wenzhe Wang, Xianta Jiang (pp. 3499-3507)

AAAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | This section includes three parts: in the first two parts, we compared the proposed algorithm (TRPCA-SPV) with several existing state-of-the-art tensor recovery methods, including RPCA (Candès et al. 2011), SNN (Gandy, Recht, and Yamada 2011), Liu's work (called Liu for short) (Candes and Plan 2010), and TRPCA (Lu et al. 2019), on an image sequence recovery task and an image classification task, to evaluate how effectively the algorithms alleviate the SPV problem in tensor recovery. The third part was conducted to evaluate the performance of TRPCA-SPV with different values of the parameter κ.
Researcher Affiliation | Academia | Jingjing Zheng (1,2), Xiaoqin Zhang (2), Wenzhe Wang (2), Xianta Jiang (1). 1: Department of Computer Science, Memorial University of Newfoundland, Newfoundland and Labrador, Canada; 2: College of Computer Science and Artificial Intelligence, Wenzhou University, Zhejiang, China.
Pseudocode | Yes | Algorithm 1: Tensor Singular Value Thresholding (t-SVT) ... Algorithm 2: Tensor recovery for SPV (TRSPV) ... Algorithm 3: TRPCA for SPV (TRPCA-SPV). A hedged sketch of the t-SVT step is given after this table.
Open Source Code | No | The paper provides links to the source code of compared methods (RPCA, SNN, Liu) in footnotes but does not provide a link or explicit statement about the availability of the source code for their proposed TRSPV/TRPCA-SPV algorithms.
Open Datasets | Yes | In this part, all five methods were tested on two hyperspectral image databases including Pavia University and Botswana. ... Image Classification: In this part, image classification was conducted on two datasets including the ORL database and the CMU PIE database.
Dataset Splits | No | The paper mentions a train/test split for image classification: "For each dataset, 90% of samples were randomly selected as training set, and the rest were taken as testing set." However, it does not mention a distinct validation set or provide specific split percentages/counts for all datasets used, particularly for the image sequence recovery task. (A minimal sketch of such a random split is given after this table.)
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments were provided in the paper.
Software Dependencies | No | The paper mentions using and comparing against existing methods like RPCA, SNN, Liu, and TRPCA, and footnotes link to their GitHub repositories for 'Robust PCA' and 'Lib ADMM-toolbox'. However, it does not provide specific version numbers for any of these software packages, libraries, or frameworks required for reproducibility (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup | Yes | For Pavia University, we empirically set λ = 1/√(max(n1, n2)) for RPCA (which deals with each band separately), λ = [240, 240, 240] for SNN, and λ = 330·[0.2, 0.1, 0.7] for Liu. For Botswana, we empirically set λ = 0.9/√(max(n1, n2)) for RPCA (which deals with each band separately), λ = [340, 340, 340] for SNN, and λ = 370·[0.3, 0.1, 0.6] for Liu. For TRPCA, the parameter λ was tuned to λ = 0.9/√(max(n1, n2)·n3) and λ = 0.8/√(max(n1, n2)·n3) for Pavia University and Botswana respectively, in which n3 is the number of spectral bands. For TRPCA-SPV, the parameter λ was tuned to λ = 0.9/√(max(n1, n2)·n3) for the two databases. ... For RPCA and TRPCA, the parameter λ was set to λ = 1/√(max(n1·n2, n3)) and λ = 1/√(max(n1, n2)·n3) respectively, as suggested in (Lu et al. 2019), in which n3 was the number of samples. For TRPCA-SPV, the parameter λ was set to λ = 1/√(max(n1, n2)·n3) as well. ... The experiments for each parameter κ were repeated 10 times. (A sketch of how these λ values are computed is given after this table.)
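The Pseudocode row above lists Algorithm 1 as tensor singular value thresholding (t-SVT). As a point of reference, here is a minimal NumPy sketch of the standard t-SVT operator used in tensor RPCA (Lu et al. 2019), on which the paper builds. It is an illustrative sketch, not the authors' implementation; the paper's Algorithm 1 may differ in details (e.g., exploiting the conjugate symmetry of the FFT to halve the number of SVDs).

```python
import numpy as np

def t_svt(X, tau):
    """Sketch of tensor singular value thresholding (t-SVT).

    Soft-thresholds the singular values of each frontal slice of the
    tensor in the Fourier domain along the third mode, which is the
    standard t-SVT construction for the tensor nuclear norm.
    """
    n1, n2, n3 = X.shape
    Xf = np.fft.fft(X, axis=2)                   # move tubes to the Fourier domain
    Wf = np.zeros_like(Xf, dtype=complex)
    for k in range(n3):                          # matrix SVT on every frontal slice
        U, s, Vh = np.linalg.svd(Xf[:, :, k], full_matrices=False)
        s = np.maximum(s - tau, 0.0)             # soft-threshold the singular values
        Wf[:, :, k] = (U * s) @ Vh
    return np.real(np.fft.ifft(Wf, axis=2))      # back to the original domain
```

In TRPCA-style models, a step like t_svt(X, tau) would serve as the proximal operator of the tensor nuclear norm inside an iterative (e.g., ADMM) solver, with tau tied to the penalty parameter of that solver.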
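The Dataset Splits row quotes a random 90/10 train/test split for the classification experiments. Below is a minimal sketch of such a split; the function name, fixed seed, and sample count are assumptions for illustration, since the paper does not describe how the random selection was implemented.

```python
import numpy as np

def random_split(n_samples, train_ratio=0.9, seed=0):
    """Randomly split sample indices into train/test sets (90/10 by default)."""
    rng = np.random.default_rng(seed)            # fixed seed: an assumption, not from the paper
    perm = rng.permutation(n_samples)            # shuffle all sample indices
    n_train = int(round(train_ratio * n_samples))
    return perm[:n_train], perm[n_train:]        # training indices, testing indices

# Example: a dataset with 400 samples yields a 360/40 split.
train_idx, test_idx = random_split(400)
```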
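All of the λ settings quoted in the Experiment Setup row follow a c/√(·) scaling of the tensor or matrix dimensions, as in (Candès et al. 2011) and (Lu et al. 2019). The short sketch below shows how such values would be computed for a hypothetical n1 × n2 × n3 tensor; the dimensions are illustrative assumptions, not values taken from the paper.

```python
import math

# Assumed dimensions for illustration: an n1 x n2 image with n3 bands
# (recovery task) or n3 samples (classification task).
n1, n2, n3 = 200, 200, 100

# Image sequence recovery (Pavia University settings quoted above).
lam_rpca      = 1.0 / math.sqrt(max(n1, n2))        # RPCA, applied to each band separately
lam_trpca     = 0.9 / math.sqrt(max(n1, n2) * n3)   # TRPCA
lam_trpca_spv = 0.9 / math.sqrt(max(n1, n2) * n3)   # TRPCA-SPV

# Image classification (settings suggested in Lu et al. 2019).
lam_rpca_cls  = 1.0 / math.sqrt(max(n1 * n2, n3))   # RPCA on vectorized samples
lam_trpca_cls = 1.0 / math.sqrt(max(n1, n2) * n3)   # TRPCA and TRPCA-SPV

print(lam_rpca, lam_trpca, lam_trpca_spv, lam_rpca_cls, lam_trpca_cls)
```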