Fast Recursive Low-rank Tensor Learning for Regression

Authors: Ming Hou, Brahim Chaib-draa

IJCAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In our experiments, the root mean squares of prediction (RMSEP) [Kim et al., 2005] as well as the Q index [Luo et al., 2015] are used to quantitatively gauge the predictive performance of our approach. We recorded the CPU learning time per new mini-batch for all recursive methods, and we also gave the CPU learning time for batch methods using the entire training set. We compared RHOPLS with NPLS [Bro, 1996], RNPLS [Eliseyev and Aksenova, 2013], HOPLS [Zhao et al., 2013] and IHOPLS [Hou and Chaib-draa, 2016] on general tensorial sequences with no special structures assumed, in contrast to spatio-temporal data. (A hedged sketch of these metrics follows the table.)
Researcher Affiliation | Academia | Ming Hou and Brahim Chaib-draa, Department of Computer Science and Software Engineering, Laval University, Quebec, Canada (ming.hou.1@ulaval.ca, brahim.chaib-draa@ift.ulaval.ca)
Pseudocode | No | The paper describes its method in several "Step" sections (e.g., Step 0: Initial Approximation; Step 1: Incremental Approximation), but these are descriptive paragraphs, not structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any explicit statement or link regarding the availability of its source code.
Open Datasets | Yes | We first tested RHOPLS on the Utrecht Multi-Person Motion (UMPM) benchmark [Van Der Aa et al., 2011], which provides simultaneous recordings of video sequences and 3D ground-truth positions of natural human motions in daily-life activities. ... the tests were carried out on a benchmark tensor regression application, that is, decoding limb movements from a monkey's brain signals using the Neurotycho food-tracking electrocorticography (ECoG) dataset [Chao et al., 2010].
Dataset Splits | Yes | One half of the shuffled sequence served as the training set while the remaining half was used for testing. The optimal hyper-parameters of all methods were determined by cross-validation, so that their best performance could be exhibited, balancing speed and accuracy.
Hardware Specification | Yes | All tests were done on a server with a 12-core 3.20 GHz CPU.
Software Dependencies | No | The paper does not specify software dependencies with version numbers used for its implementation or experiments.
Experiment Setup | Yes | For RHOPLS, the initial number of latent vectors F and the initial input loadings L and output loadings K need to be tuned, as do the incremental loadings. For simplicity, we assumed the incremental loadings equal to the initial L and K just to reduce the number of hyper-parameters, and L, K are tuned by conducting a grid search over combinations of typical values, i.e., for L of a 3rd-order input tensor, we might search on [4, 8], [8, 12], [12, 16], [16, 20] ... and so on. (A hedged grid-search sketch also follows the table.)
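The Research Type row quotes RMSEP and the Q index as the paper's evaluation metrics. The snippet below is a minimal sketch of how these two scores are commonly computed, assuming the usual root-mean-square definition of RMSEP and the HOPLS-style convention Q = 1 - ||Y - Ŷ||_F / ||Y||_F; the exact formulas of Kim et al. [2005] and Luo et al. [2015] may differ, and all variable names here are illustrative, not taken from the paper.

```python
# Sketch of the two evaluation metrics named in the excerpt above.
# Assumptions: RMSEP is the root mean square of the prediction residuals,
# and the Q index is 1 minus the relative Frobenius norm of the residual.
import numpy as np

def rmsep(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Root mean square error of prediction over all tensor entries."""
    residual = y_true - y_pred
    return float(np.sqrt(np.mean(residual ** 2)))

def q_index(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Q index: 1 minus the relative Frobenius norm of the residual."""
    num = np.linalg.norm((y_true - y_pred).ravel())
    den = np.linalg.norm(y_true.ravel())
    return float(1.0 - num / den)

# Example on a small synthetic 3rd-order output sequence (illustrative only).
rng = np.random.default_rng(0)
Y = rng.normal(size=(50, 3, 10))            # e.g. 50 samples of 3x10 outputs
Y_hat = Y + 0.1 * rng.normal(size=Y.shape)  # noisy "predictions"
print(rmsep(Y, Y_hat), q_index(Y, Y_hat))
```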
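The Dataset Splits and Experiment Setup rows describe a half/half shuffled split plus a grid search over the loading sizes L and K. The sketch below illustrates that protocol under stated assumptions: `fit_rhopls` and `predict_rhopls` are hypothetical placeholders (the paper releases no code), and the grid values merely mirror the intervals quoted above rather than the paper's actual settings.

```python
# Sketch of the tuning protocol quoted in the table: shuffle the sequence,
# train on one half, test on the other, and grid-search the loading sizes.
import itertools
import numpy as np

def fit_rhopls(X_train, Y_train, F, L, K):
    """Placeholder for the (unreleased) RHOPLS training routine."""
    raise NotImplementedError

def predict_rhopls(model, X_test):
    """Placeholder for RHOPLS prediction on new tensor samples."""
    raise NotImplementedError

def tune_rhopls(X, Y, F=5,
                L_grid=((4, 8), (8, 12), (12, 16), (16, 20)),
                K_grid=((4, 8), (8, 12), (12, 16), (16, 20)),
                seed=0):
    # One half of the shuffled sequence for training, the other half for test.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    half = len(X) // 2
    train, test = idx[:half], idx[half:]

    best_err, best_cfg = np.inf, None
    for L, K in itertools.product(L_grid, K_grid):
        model = fit_rhopls(X[train], Y[train], F=F, L=L, K=K)
        Y_pred = predict_rhopls(model, X[test])
        err = float(np.sqrt(np.mean((Y[test] - Y_pred) ** 2)))  # RMSEP
        if err < best_err:
            best_err, best_cfg = err, (L, K)
    return best_cfg, best_err
```

Keeping the incremental loadings tied to the initial ones, as the excerpt describes, shrinks this grid from four hyper-parameters to the two shown here.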