Expected Tensor Decomposition with Stochastic Gradient Descent

Authors: Takanori Maehara, Kohei Hayashi, Ken-ichi Kawarabayashi

AAAI 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results confirm that our algorithms significantly outperform all existing methods in terms of accuracy. We also show that they can successfully decompose a large tensor, containing billion-scale nonzero elements. Throughout the experiments, the regularization parameter was fixed as ρ = 0.0001. All experiments were conducted using an Intel Xeon E5-2690 2.90GHz CPU with 256GB memory and Ubuntu 12.04.
Researcher Affiliation | Academia | Takanori Maehara (1,3), Kohei Hayashi (2,3), Ken-ichi Kawarabayashi (2,3). 1) Shizuoka University, Shizuoka, Japan; 2) National Institute of Informatics, Tokyo, Japan; 3) JST, ERATO, Kawarabayashi Large Graph Project.
Pseudocode | No | The paper describes algorithms using mathematical equations but does not include pseudocode or an algorithm block.
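
Since the paper presents its updates only as equations, the sketch below shows what one plain SGD step for a rank-R CP (CANDECOMP/PARAFAC) model of a third-order tensor could look like. It is a minimal illustration under assumed choices (squared loss, an ℓ2 regularizer with weight rho matching the paper's ρ, and all function and variable names); it is not the authors' exact algorithm.

#include <cstddef>
#include <vector>

// One SGD step for a rank-R CP model of a third-order tensor.
// A (I x R), B (J x R), C (K x R) are the factor matrices.
// x is the observed entry at index (i, j, k); eta is the step size,
// rho the l2-regularization weight (the paper fixes rho = 0.0001).
// Illustrative sketch only, not the authors' exact update rule.
void sgd_step(std::vector<std::vector<double>>& A,
              std::vector<std::vector<double>>& B,
              std::vector<std::vector<double>>& C,
              std::size_t i, std::size_t j, std::size_t k,
              double x, double eta, double rho) {
  const std::size_t R = A[0].size();

  // Model prediction: sum_r A[i][r] * B[j][r] * C[k][r].
  double pred = 0.0;
  for (std::size_t r = 0; r < R; ++r) pred += A[i][r] * B[j][r] * C[k][r];
  const double err = pred - x;  // residual of the squared loss

  // Gradient step on the three factor rows, using the old values.
  for (std::size_t r = 0; r < R; ++r) {
    const double a = A[i][r], b = B[j][r], c = C[k][r];
    A[i][r] -= eta * (err * b * c + rho * a);
    B[j][r] -= eta * (err * a * c + rho * b);
    C[k][r] -= eta * (err * a * b + rho * c);
  }
}
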
Open Source Code | No | The paper does not provide concrete access to source code or explicitly state its availability.
Open Datasets | Yes | We employed the Amazon review dataset (McAuley and Leskovec 2013), which contains 34 million user reviews. http://snap.stanford.edu/data/web-Amazon.html
Dataset Splits | No | The paper describes the dataset used (the Amazon review dataset) and its size, but does not provide specific details on training, validation, or testing splits.
Hardware Specification | Yes | All experiments were conducted using an Intel Xeon E5-2690 2.90GHz CPU with 256GB memory and Ubuntu 12.04.
Software Dependencies | Yes | Our algorithm was implemented in C++ and compiled using g++ v4.6 with the -O3 option.
Experiment Setup | Yes | Throughout the experiments, the regularization parameter was fixed as ρ = 0.0001. For Figure 1 (a), we used the fixed step size rule η(t) = 1/(1 + t) and for Figure 1 (b), we used η(t) = λ0/(t0 + t), where parameters λ0 and t0 were optimized by a grid search. For efficient computation, we used the mini-batch method, i.e., each sample was a sum of the tensors of 1000 reviews.
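
To make the quoted step-size rules concrete, the snippet below evaluates both schedules. The values lambda0 = 1.0 and t0 = 100.0 in the demo are illustrative placeholders, since the paper only states that these parameters were grid-searched.

#include <cstdio>

// Step-size schedules from the quoted setup:
//   Figure 1 (a): eta(t) = 1 / (1 + t)          (no tunable parameters)
//   Figure 1 (b): eta(t) = lambda0 / (t0 + t)   (lambda0, t0 grid-searched)
double eta_a(long t) { return 1.0 / (1.0 + t); }
double eta_b(long t, double lambda0, double t0) { return lambda0 / (t0 + t); }

int main() {
  // lambda0 and t0 here are illustrative, not the grid-searched optima.
  for (long t = 0; t < 5; ++t)
    std::printf("t=%ld  eta_a=%.4f  eta_b=%.4f\n",
                t, eta_a(t), eta_b(t, 1.0, 100.0));
  return 0;
}

Under the quoted mini-batch scheme, each stochastic sample is the sum of the tensors of 1000 reviews, so one SGD step processes 1000 reviews at once and the gradient variance is correspondingly lower than with single-review samples.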