Distributed Flexible Nonlinear Tensor Factorization
Authors: Shandian Zhe, Kai Zhang, Pengyuan Wang, Kuang-chih Lee, Zenglin Xu, Yuan Qi, Zoubin Ghahramani
NeurIPS 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments demonstrate the advantages of our method over several state-of-the-art approaches, in terms of both predictive performance and computational efficiency. |
| Researcher Affiliation | Collaboration | Dept. Computer Science, Purdue University, NEC Laboratories America, Princeton NJ, Dept. Marketing, University of Georgia at Athens, Yahoo! Research, Big Data Res. Center, School Comp. Sci. Eng., Univ. of Electr. Sci. & Tech. of China, Ant Financial Service Group, Alibaba, University of Cambridge |
| Pseudocode | No | The paper describes procedures and derivations but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not explicitly state that the source code for its methodology is available or provide a link to a code repository. |
| Open Datasets | Yes | ACC: a real-valued tensor describing three-way interactions (user, action, resource) in a code repository management system [23]. DBLP: a binary tensor depicting a three-way bibliography relationship (author, conference, keyword) [23]. NELL: a binary tensor representing the knowledge predicates, in the form of (entity, entity, relationship) [22]. |
| Dataset Splits | Yes | All the methods were evaluated via 5-fold cross validation. The nonzero entries were randomly split into 5 folds; 4 folds were used for training, and the remaining nonzero entries plus 0.1% of the zero entries were used for testing, so that the number of nonzero entries is comparable to the number of zero entries (see the split sketch after the table). |
| Hardware Specification | No | The paper vaguely mentions running on 'a large YARN cluster' or 'a single computer' but does not provide specific hardware details such as GPU/CPU models, memory, or other detailed specifications. |
| Software Dependencies | No | The paper states 'Our model was implemented on SPARK' but does not specify the version number of SPARK or any other software dependencies. |
| Experiment Setup | Yes | For our model, the number of inducing points was set to 100, and we used a balanced training set... Our model used the ARD kernel, and the kernel parameters were estimated jointly with the latent factors. We implemented our distributed inference algorithm with two optimization frameworks, gradient descent and L-BFGS... The number of latent factors was set to 3... We set 50 MAPPERS for GigaTensor, DinTucker and our model. (See the kernel sketch after the table.) |
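
The "Dataset Splits" row describes a concrete protocol: 5-fold cross validation over the nonzero entries, with roughly 0.1% of the zero entries added to each test fold so that zeros and nonzeros are comparable in number. Below is a minimal NumPy sketch of that split, assuming the tensor is stored as index/value arrays; all function and variable names are illustrative, not from the paper.

```python
# Hypothetical sketch of the 5-fold evaluation protocol quoted above:
# split the nonzero entries into 5 folds, train on 4, and test on the
# held-out nonzeros plus ~0.1% of the zero entries.
import numpy as np

def five_fold_splits(nonzero_idx, nonzero_val, zero_idx, seed=0):
    """Yield (train, test) splits.

    nonzero_idx : (N, K) array of tensor indices with observed nonzero values
    nonzero_val : (N,) array of the corresponding values
    zero_idx    : (M, K) array of tensor indices whose value is zero
    """
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(nonzero_val))
    folds = np.array_split(order, 5)
    n_zero_test = int(0.001 * len(zero_idx))  # 0.1% of the zero entries
    for k in range(5):
        test_nz = folds[k]
        train_nz = np.concatenate([folds[j] for j in range(5) if j != k])
        zero_sample = rng.choice(len(zero_idx), size=n_zero_test, replace=False)
        train = (nonzero_idx[train_nz], nonzero_val[train_nz])
        test_idx = np.vstack([nonzero_idx[test_nz], zero_idx[zero_sample]])
        test_val = np.concatenate([nonzero_val[test_nz], np.zeros(n_zero_test)])
        yield train, (test_idx, test_val)
```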
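
The "Experiment Setup" row quotes concrete hyperparameters: rank-3 latent factors, 100 inducing points, and an ARD kernel whose parameters are estimated jointly with the factors. Since the SPARK implementation is not released, the following is only a sketch of an ARD squared-exponential kernel evaluated between inducing inputs and the concatenated latent factors of observed entries; the dimensions, random data, and variable names are assumptions.

```python
import numpy as np

RANK = 3          # latent factors per mode, as quoted above
N_INDUCING = 100  # number of inducing points, as quoted above

def ard_kernel(X, Z, lengthscales, variance=1.0):
    """ARD squared-exponential kernel between the rows of X and Z."""
    Xs = X / lengthscales
    Zs = Z / lengthscales
    sq_dist = (
        np.sum(Xs ** 2, axis=1)[:, None]
        + np.sum(Zs ** 2, axis=1)[None, :]
        - 2.0 * Xs @ Zs.T
    )
    return variance * np.exp(-0.5 * np.maximum(sq_dist, 0.0))

# Example: cross-covariance between inducing inputs and the concatenated
# factors of 500 observed entries of a 3-mode tensor (input dimension 3 * RANK).
d = 3 * RANK
inducing_inputs = np.random.randn(N_INDUCING, d)
entry_inputs = np.random.randn(500, d)
K_mn = ard_kernel(inducing_inputs, entry_inputs, lengthscales=np.ones(d))
print(K_mn.shape)  # (100, 500)
```

In the paper's model such cross-covariances feed a sparse variational GP whose kernel parameters and latent factors are optimized jointly with gradient descent or L-BFGS; that inference machinery is omitted here.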