Boosted Sparse and Low-Rank Tensor Regression
Authors: Lifang He, Kun Chen, Wanwan Xu, Jiayu Zhou, Fei Wang
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The superior performance of our approach is demonstrated on various real-world and synthetic examples. We evaluate the effectiveness and efficiency of our method SURF through numerical experiments on both synthetic and real data, and compare with various state-of-the-art regression methods, including LASSO, Elastic Net (ENet), Regularized multilinear regression and selection (Remurs) [Song and Lu, 2017], optimal CP-rank Tensor Ridge Regression (orTRR) [Guo et al., 2012], Generalized Linear Tensor Regression Model (GLTRM) [Zhou et al., 2013], and a variant of our method with Alternating Convex Search (ACS) estimation. |
| Researcher Affiliation | Academia | Lifang He, Weill Cornell Medicine, lifanghescut@gmail.com; Kun Chen, University of Connecticut, kun.chen@uconn.edu; Wanwan Xu, University of Connecticut, wanwan.xu@uconn.edu; Jiayu Zhou, Michigan State University, dearjiayu@gmail.com; Fei Wang, Weill Cornell Medicine, few2001@med.cornell.edu |
| Pseudocode | Yes | Algorithm 1 Fast Stagewise Unit-Rank Tensor Factorization (SURF) |
| Open Source Code | Yes | Our code is available at https://github.com/LifangHe/SURF. |
| Open Datasets | Yes | Data used in the preparation of this article were obtained from the Parkinson's Progression Markers Initiative (PPMI) database (http://www.ppmi-info.org/data). |
| Dataset Splits | Yes | We follow [Kampa et al., 2014] to arrange the test and training sets in the ratio of 1:5. The hyperparameters of all methods are optimized using 5-fold cross validation on the training set. (A sketch of this split protocol follows the table.) |
| Hardware Specification | Yes | All methods are implemented in MATLAB and executed on a machine with 3.50GHz CPU and 256GB RAM. |
| Software Dependencies | No | The paper states that methods are 'implemented in MATLAB' and that 'for LASSO and ENet we use the MATLAB package glmnet' and 'for GLTRM... based on TensorReg toolbox,' but it does not provide specific version numbers for MATLAB or any of the mentioned packages/toolboxes. |
| Experiment Setup | Yes | The hyperparameters of all methods are optimized using 5-fold cross validation on the training set, with ranges α ∈ {0.1, 0.2, ..., 1}, λ ∈ {10⁻³, 5×10⁻³, 10⁻², 5×10⁻², ..., 5×10², 10³}, and R ∈ {1, 2, ..., 50}. Specifically, for GLTRM, ACS, and SURF, we simply set α = 1. For LASSO, ENet and ACS, we generate a sequence of 100 values for λ to cover the whole path. For fairness, the number of iterations for all compared methods is fixed to 100. (A sketch of these grids follows the table.) |
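
To make the Dataset Splits row concrete, here is a minimal MATLAB sketch of the 1:5 test:train split and the 5-fold cross validation over the training set. This is an illustration under stated assumptions, not the paper's code: the sample count `n` and all variable names are placeholders, and `cvpartition` from the Statistics and Machine Learning Toolbox is assumed to be available.

```matlab
% Minimal sketch: 1:5 test:train holdout, then 5-fold CV on the training set.
% Assumes the Statistics and Machine Learning Toolbox (cvpartition).
n = 600;                                   % placeholder sample count
holdout  = cvpartition(n, 'HoldOut', 1/6); % test:train = 1:5  ->  1/6 held out
trainIdx = training(holdout);              % logical mask of training samples
testIdx  = test(holdout);                  % logical mask of test samples

% Hyperparameters are tuned by 5-fold CV on the training set only.
cv = cvpartition(nnz(trainIdx), 'KFold', 5);
for k = 1:cv.NumTestSets
    foldTrain = training(cv, k);           % CV-train mask within training set
    foldVal   = test(cv, k);               % CV-validation mask
    % ... fit candidate model on foldTrain, score it on foldVal ...
end
```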
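Similarly, the hyperparameter grids quoted in the Experiment Setup row can be written out as follows. The grid values come from the quoted text; the variable names are hypothetical.

```matlab
% Hyperparameter grids as quoted in the Experiment Setup row
% (variable names are placeholders).
alphaGrid = 0.1:0.1:1;                           % alpha in {0.1, 0.2, ..., 1}

% lambda in {10^-3, 5*10^-3, 10^-2, 5*10^-2, ..., 5*10^2, 10^3}:
% interleave 1*10^k and 5*10^k for k = -3..2, then append 10^3.
lambdaGrid = [reshape([1; 5] * 10.^(-3:2), 1, []), 1e3];

rankGrid = 1:50;                                 % CP rank R in {1, 2, ..., 50}

% For LASSO/ENet the paper instead has glmnet generate a 100-value
% lambda sequence covering the whole regularization path.
```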