Tensor Completion with Side Information: A Riemannian Manifold Approach

Authors: Tengfei Zhou, Hui Qian, Zebang Shen, Chao Zhang, Congfu Xu

IJCAI 2017

Reproducibility assessment (each entry gives a variable, its assessed result, and supporting evidence quoted from the paper):
Research Type: Experimental
Evidence: "Numerical experiments suggest that our method is more accurate than the state-of-the-art without compromising the efficiency." "Empirical studies illustrate that our method achieves much more accurate solutions within comparable processing time than the state-of-the-art." "We validate the effectiveness of the proposed solver CGSI by comparing it with the state-of-the-art."
Researcher Affiliation: Academia
Evidence: "Tengfei Zhou, Hui Qian, Zebang Shen, Chao Zhang, Congfu Xu. College of Computer Science and Technology, Zhejiang University, China. {zhoutengfei,qianhui,shenzebang,zczju,xucongfu}@zju.edu.cn"
Pseudocode: Yes
Evidence: "Algorithm 1 CGSI: a Riemannian CG method"
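The paper's Algorithm 1 (CGSI) itself is not reproduced in this report, but the general shape of a Riemannian conjugate gradient method can be sketched on a much simpler manifold. The following is a minimal illustration on the unit sphere, assuming a Fletcher-Reeves update, projection-based vector transport, and Armijo backtracking; none of these choices are taken from the paper, which optimizes over a manifold of fixed-multilinear-rank tensors.

```python
import numpy as np

def riemannian_cg_sphere(A, x0, iters=200):
    """Illustrative Fletcher-Reeves Riemannian CG on the unit sphere:
    minimize f(x) = x^T A x over ||x|| = 1, whose minimizer is the
    eigenvector of A with the smallest eigenvalue."""
    proj = lambda x, v: v - np.dot(x, v) * x                # tangent-space projection at x
    retract = lambda x, v: (x + v) / np.linalg.norm(x + v)  # retraction back onto the sphere
    f = lambda x: x @ A @ x
    x = x0 / np.linalg.norm(x0)
    g = proj(x, 2 * A @ x)                                  # Riemannian gradient
    d = -g
    for _ in range(iters):
        if np.dot(g, d) >= 0:                               # safeguard: restart on non-descent direction
            d = -g
        t, fx = 1.0, f(x)
        while f(retract(x, t * d)) > fx + 1e-4 * t * np.dot(g, d):
            t *= 0.5                                        # Armijo backtracking line search
        x_new = retract(x, t * d)
        g_new = proj(x_new, 2 * A @ x_new)
        beta = np.dot(g_new, g_new) / np.dot(g, g)          # Fletcher-Reeves coefficient
        d = -g_new + beta * proj(x_new, d)                  # transport old direction by projection
        x, g = x_new, g_new
    return x

A = np.diag([3.0, 2.0, 0.5])
x = riemannian_cg_sphere(A, np.array([1.0, 1.0, 1.0]))
```

The same gradient/transport/retraction skeleton carries over to the tensor manifold used by CGSI, with far more involved geometry.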
Open Source Code: No
Evidence: The paper does not provide any specific repository link or explicit statement about the release of its source code.
Open Datasets: Yes
Evidence: "We adopt the dataset provided in [Foster et al., 2006], which contains images of eight different rural scenes taken under 33 various wavelengths." "In recommendation tasks, two datasets are considered: MovieLens 10M (ML10M) and MovieLens 20M (ML20M)."
Dataset Splits: Yes
Evidence: "In this scenario, for both datasets, 80% of samples are chosen as the training set, and the rest are left for testing." "In Fig. 3, we report another scenario, in which the percentage of training samples is varied from 10% to 70% and the rank parameter is fixed to (10, 10, 10)."
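The 80/20 protocol quoted above can be sketched as a random split of the observed entries of a partially observed tensor. The shapes, sample count, and variable names below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical tensor dimensions and number of observed entries (not from the paper).
rng = np.random.default_rng(0)
shape = (30, 40, 20)
n_obs = 5000

# Sample linear indices of "observed" entries without replacement,
# then keep 80% for training and leave the rest for testing.
obs = rng.choice(np.prod(shape), size=n_obs, replace=False)
split = int(0.8 * n_obs)
train_idx, test_idx = obs[:split], obs[split:]

# Convert linear indices back to (i, j, k) subscripts for tensor access.
train_subs = np.unravel_index(train_idx, shape)
```

Varying the `0.8` factor from 0.1 to 0.7 reproduces the second scenario the paper describes.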
Hardware Specification: Yes
Evidence: "All the experiments are performed in Matlab on the same machine with 3.0 GHz Intel E5-2690 CPU and 128GB RAM."
Software Dependencies: Yes
Evidence: "All the experiments are performed in Matlab on the same machine with 3.0 GHz Intel E5-2690 CPU and 128GB RAM." (Matlab is the only software dependency stated; no version is given.)
Experiment Setup: Yes
Evidence: "All the compared methods are terminated when the training NRSE is less than 0.003 or after more than 300 epochs." "The rank parameter is fixed to (10, 10, 10)." "To show the impact of parameter α on the performance of our method, we depict the relation between RMSE and α in Fig. 4, where the rank parameter is set to (10, 10, 10), and the percentage of training samples is set to 80%."
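The stopping rule quoted above combines a training-error threshold with an epoch cap. A minimal sketch follows; the NRSE formula used here (residual norm over the norm of the observed entries) is an assumption, as the paper's exact normalization is not quoted in this report.

```python
import numpy as np

def nrse(pred_vals, true_vals):
    """Assumed normalized root squared error over observed entries:
    ||pred - true||_F / ||true||_F."""
    return np.linalg.norm(pred_vals - true_vals) / np.linalg.norm(true_vals)

def should_stop(pred_vals, true_vals, epoch, tol=0.003, max_epochs=300):
    """Terminate when training NRSE drops below tol or the epoch cap is hit."""
    return nrse(pred_vals, true_vals) < tol or epoch >= max_epochs

t = np.array([1.0, 2.0, 3.0])
print(should_stop(t * 1.001, t, epoch=10))   # 0.1% error -> True
print(should_stop(t * 1.1, t, epoch=10))     # 10% error, epoch under cap -> False
print(should_stop(t * 1.1, t, epoch=300))    # epoch cap reached -> True
```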