Common and Discriminative Subspace Kernel-Based Multiblock Tensor Partial Least Squares Regression

Authors: Ming Hou, Qibin Zhao, Brahim Chaib-draa, Andrzej Cichocki

AAAI 2016

Reproducibility assessment (variable, result, and supporting excerpt or explanation):

- Research Type: Experimental. "Finally, to show the effectiveness and advantages of our approach, we test it on the real-life regression task in computer vision, i.e., reconstruction of human pose from multiview video sequences."
- Researcher Affiliation: Academia. Laval University, Quebec, Canada; RIKEN Brain Science Institute, Wako, Japan; Shanghai Jiao Tong University, Shanghai, China; Skolkovo Institute of Science and Technology, Moscow, Russia.
- Pseudocode: Yes. "Algorithm 1: Our Kernel-based Multiblock Tensor Partial Least Squares (KMTPLS)."
- Open Source Code: No. The paper does not provide any explicit statement or link to open-source code for the described methodology.
- Open Datasets: Yes. "The dataset was taken from the Utrecht Multi-Person Motion (UMPM) benchmark (Van Der Aa et al. 2011)... carried out on the Berkeley Multimodal Human Action Database (MHAD) (Ofli et al. 2013)."
- Dataset Splits: Yes. "For each scenario, we split the video sequence into two different partitions, i.e., a training set from the first 1/3 part and a test set from the remaining 2/3 part, respectively. The cross-validation applied on the training set was performed to select all the desired tuning parameters."
- Hardware Specification: No. The paper does not specify hardware details such as GPU/CPU models, processors, or cloud resources used for the experiments.
- Software Dependencies: No. The paper mentions that "the polynomial kernel function of second degree was employed" but does not specify any software names with version numbers.
- Experiment Setup: Yes. "To be fair, the maximal possible number of latent vectors that can be extracted from each individual block was set to 10 for both KMTPLS and MMCR. The optimal number of common and discriminative latent vectors was selected by cross-validation on the training set. The maximal number of latent vectors from each block for both models was fixed at 8. Finally, the total number of predictor tensor blocks T grew to 4, and the importance parameter α for each block was simply fixed to 1/T."
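The chronological split the paper describes (training set from the first 1/3 of each video sequence, test set from the remaining 2/3) can be sketched as follows. The function name and the use of NumPy are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def sequential_split(frames, train_fraction=1/3):
    """Split a sequence chronologically: the first `train_fraction` of
    frames becomes the training set, the remainder the test set.

    `frames` is any array-like of per-frame features (one row per frame).
    Hypothetical helper; the paper states only the 1/3 vs 2/3 partition."""
    n_train = int(len(frames) * train_fraction)
    return frames[:n_train], frames[n_train:]

# Toy example: 9 frames -> first 3 frames for training, remaining 6 for testing.
frames = np.arange(9)
train, test = sequential_split(frames)
```

Because the split is chronological rather than random, no test frame precedes any training frame, which matches the video-sequence setting described above.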
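The one modeling detail the paper does state on the software side is the kernel choice: a polynomial kernel of second degree. A minimal sketch of the corresponding Gram-matrix computation is below; the offset `c` and the function name are assumptions, since the paper specifies only the degree:

```python
import numpy as np

def poly2_kernel(X, Y=None, c=1.0):
    """Degree-2 polynomial kernel K(x, y) = (x . y + c)^2, computed
    as a Gram matrix between the rows of X and Y.

    The offset c is a hypothetical choice; the paper gives only the degree."""
    if Y is None:
        Y = X
    return (X @ Y.T + c) ** 2

# Usage: Gram matrix of two unit vectors.
X = np.array([[1.0, 0.0], [0.0, 1.0]])
K = poly2_kernel(X)
```

For `Y = X` the resulting Gram matrix is symmetric and positive semidefinite, as required for a kernel-based PLS method.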
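The experiment setup fixes the importance parameter α of each of the T predictor tensor blocks to 1/T. The sketch below only illustrates that equal weighting as an average of per-block Gram matrices; it is not the full KMTPLS combination rule, whose details are given in the paper's Algorithm 1:

```python
import numpy as np

def combine_block_kernels(kernels):
    """Combine the Gram matrices of T predictor blocks with equal
    importance alpha = 1/T, i.e., a plain average.

    Illustrative only: the paper fixes alpha = 1/T, but how the blocks
    enter the KMTPLS model is specified by its Algorithm 1, not here."""
    T = len(kernels)
    return sum(kernels) / T

# Toy example with T = 2 blocks.
K1 = np.eye(2) * 2.0
K2 = np.zeros((2, 2))
K_combined = combine_block_kernels([K1, K2])
```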