Low-rank tensor completion: a Riemannian manifold preconditioning approach

Authors: Hiroyuki Kasai, Bamdev Mishra

ICML 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Numerical comparisons suggest that our proposed algorithms robustly outperform state-of-the-art algorithms across different synthetic and real-world datasets. In the batch setting, we show a number of numerical comparisons of our proposed conjugate gradient algorithm with state-of-the-art algorithms that include TOpt (Filipovic & Jukic, 2013) and geomCG (Kressner et al., 2014), for comparisons with Tucker decomposition based algorithms, and HaLRTC (Liu et al., 2013), Latent (Tomioka et al., 2011), and Hard (Signoretto et al., 2014) as nuclear norm minimization algorithms.
Researcher Affiliation | Collaboration | Hiroyuki Kasai (kasai@is.uec.ac.jp), The University of Electro-Communications, 1-5-1 Chofu-gaoka, Chofu-shi, Tokyo 182-8585, Japan. Bamdev Mishra (bamdevm@amazon.com), Amazon Development Centre India, Bengaluru 560055, Karnataka, India.
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. Table 1 lists optimization ingredients but is not an algorithm. (A generic sketch of how such ingredients assemble into a descent loop appears after the table.)
Open Source Code | Yes | Our proposed algorithms are implemented in the Matlab toolbox Manopt (Boumal et al., 2014). The Matlab codes for first- and second-order implementations, e.g., gradient descent and trust-region methods, are available at https://bamdevmishra.com/codes/tensorcompletion/.
Open Datasets | Yes | We consider the hyperspectral image Ribeira (Foster et al., 2007) discussed in (Signoretto et al., 2011; Kressner et al., 2014). MovieLens-10M (http://grouplens.org/datasets/movielens/): this dataset contains 10000054 ratings corresponding to 71567 users and 10681 movies. (A sketch of loading the ratings file appears after the table.)
Dataset Splits | Yes | This set is further randomly split into 80/10/10 train/validation/test partitions. We perform five random 80/10/10 train/validation/test partitions. (A sketch of this split appears after the table.)
Hardware Specification | Yes | All simulations are performed in Matlab on a 2.6 GHz Intel Core i7 machine with 16 GB RAM.
Software Dependencies | No | The paper mentions 'Matlab' and the 'Manopt' toolbox, but provides no version numbers for these or for any other ancillary software.
Experiment Setup | Yes | Algorithms are initialized randomly, as suggested in (Kressner et al., 2014), and are stopped when either the mean square error (MSE) on the train set Ω is below 10^-12 or the number of iterations exceeds 250. A popular formula for the step-size γ_k at the k-th update is γ_k = γ_0/(1 + γ_0 λ k), where γ_0 is the initial step-size and λ is a fixed reduction factor. Following (Bottou, 2012), γ_0 is selected in a pre-training phase using a small sample of the training set; λ is fixed to 10^-7, and γ_0 is selected from the step-size list {8, 9, 10, 11, 12}. 10% of the entries are randomly observed. The pre-training uses 10% of all the frontal slices. The maximum number of outer loops is set to 100. (A sketch of the step-size schedule and stopping rule appears after the table.)
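
Since the paper ships no pseudocode, the following is a minimal, hypothetical sketch of how optimization ingredients of the kind its Table 1 lists (a Riemannian gradient and a retraction) typically assemble into a retraction-based descent loop. The callables `rgrad` and `retract` are placeholders, not the authors' API.

```python
# Hypothetical skeleton of a retraction-based Riemannian descent loop.
# `rgrad` and `retract` are placeholder callables, not the authors' code.
def riemannian_gd(x0, rgrad, retract, step=1e-2, max_iter=250):
    x = x0
    for _ in range(max_iter):
        g = rgrad(x)               # Riemannian gradient at the current point
        x = retract(x, -step * g)  # step along -g, map back to the manifold
    return x
```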
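
For MovieLens-10M, the ratings ship as a ratings.dat file with '::'-separated UserID::MovieID::Rating::Timestamp fields; a minimal loader might look like the sketch below. How the paper tensorizes the ratings (e.g., binning timestamps into a third mode) is not stated in the quoted text, so that step is omitted.

```python
def load_movielens_ratings(path="ratings.dat"):
    """Parse MovieLens-10M ratings into (user, movie, rating, timestamp) tuples."""
    ratings = []
    with open(path) as f:
        for line in f:
            user, movie, rating, ts = line.strip().split("::")
            ratings.append((int(user), int(movie), float(rating), int(ts)))
    return ratings
```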
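
The quoted 80/10/10 protocol amounts to a random permutation of the observed entries. The sketch below assumes the observed indices are held in a NumPy array, which is a guess about the data layout, not something the paper specifies.

```python
import numpy as np

def split_80_10_10(observed_idx, seed=0):
    """Randomly partition observed entry indices into 80/10/10 train/val/test."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(len(observed_idx))
    n_train = int(0.8 * len(perm))
    n_val = int(0.1 * len(perm))
    train = observed_idx[perm[:n_train]]
    val = observed_idx[perm[n_train:n_train + n_val]]
    test = observed_idx[perm[n_train + n_val:]]
    return train, val, test
```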
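
The reported step-size decay γ_k = γ_0/(1 + γ_0 λ k) and the batch stopping rule translate directly into code. The sketch below mirrors the quoted constants (λ = 10^-7, MSE tolerance 10^-12, 250-iteration cap, γ_0 drawn from {8, ..., 12} in pre-training) but is an illustration, not the authors' implementation.

```python
def step_size(k, gamma0, lam=1e-7):
    """Decayed step size gamma_k = gamma_0 / (1 + gamma_0 * lam * k).

    gamma0 is chosen from {8, 9, 10, 11, 12} in a pre-training phase
    per the quoted setup.
    """
    return gamma0 / (1.0 + gamma0 * lam * k)

def should_stop(train_mse, iteration, tol=1e-12, max_iter=250):
    """Stop when the train MSE falls below tol or the iteration cap is exceeded."""
    return train_mse < tol or iteration > max_iter
```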