Nonparametric Decomposition of Sparse Tensors

Authors: Conor Tillinghast, Shandian Zhe

ICML 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | For evaluation, we first tested tensor sampling. NEST indeed generates increasingly sparse entries as the tensor size grows, while the existing (exchangeable) models generate dense data. We also showcase the sparse tensors generated by NEST. We then evaluated NEST in three real-world applications. In predicting both entry values and entry indices (i.e., link prediction), NEST outperforms the state-of-the-art multilinear and nonparametric decomposition models, often significantly.
Researcher Affiliation | Academia | Conor Tillinghast and Shandian Zhe, School of Computing, University of Utah. Correspondence to: Shandian Zhe <zhe@cs.utah.edu>.
Pseudocode | No | The paper describes the algorithm in text form in Section 4 ('Algorithm') but does not present any formal pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | (1) Alog (Zhe et al., 2016b), a three-mode tensor of size 200 × 100 × 200. (2) MovieLens (https://grouplens.org/datasets/movielens/100k/), a three-mode tensor about movie ratings. (3) SG (Li et al., 2015), a three-mode tensor extracted from data in Foursquare in Singapore.
Dataset Splits | Yes | For each dataset, we randomly split the existent entries into 80% for training and the remaining 20% for test. (A minimal split sketch is given after the table.)
Hardware Specification | No | The paper does not explicitly describe any specific hardware components (e.g., CPU, GPU models, or memory) used for running the experiments.
Software Dependencies | No | The paper mentions 'TensorFlow (Abadi et al., 2016)' as the implementation framework but does not provide a specific version number for TensorFlow or any other software dependencies.
Experiment Setup | Yes | The learning rate was chosen from {10^-4, 2×10^-4, 5×10^-4, 10^-3, 5×10^-3, 10^-2}. The mini-batch size was set to 200 for Alog and MovieLens, and 512 for SG. To ensure convergence, we ran for 700 epochs on Alog, 300 epochs on MovieLens, and 500 epochs on SG using ADAM (Kingma and Ba, 2014). We varied the dimension of the embeddings, i.e., the rank, from {3, 5, 7, 9}. (A hyperparameter sketch follows the table.)
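
Since no code is released, the following is a minimal sketch of how the reported protocol could be reproduced. Only the 80/20 random split of existent entries, the learning-rate grid, the per-dataset mini-batch sizes and epoch counts, the rank grid, and the use of ADAM with a TensorFlow implementation come from the paper text quoted above; the COO-style representation of the observed entries, the helper names (`split_entries`, `make_optimizer`), and the specific `tf.keras.optimizers.Adam` call are assumptions for illustration.

```python
import numpy as np
import tensorflow as tf

# Observed (existent) entries of a sparse three-mode tensor in COO form:
# `indices` is an (N, 3) integer array of entry indices and `values` is an
# (N,) array of entry values.  The concrete loaders for Alog / MovieLens / SG
# are not provided by the paper, so these arrays are assumed inputs.

def split_entries(indices, values, train_frac=0.8, seed=0):
    """Randomly split the observed entries into 80% train / 20% test."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(len(values))
    cut = int(train_frac * len(values))
    train_idx, test_idx = perm[:cut], perm[cut:]
    return (indices[train_idx], values[train_idx]), \
           (indices[test_idx], values[test_idx])

# Hyperparameter grids reported in the paper: the learning rate and the rank
# (embedding dimension) are selected from these sets, while the mini-batch
# size and epoch count are fixed per dataset.
LEARNING_RATES = [1e-4, 2e-4, 5e-4, 1e-3, 5e-3, 1e-2]
RANKS = [3, 5, 7, 9]
PER_DATASET = {
    "Alog":      {"batch_size": 200, "epochs": 700},
    "MovieLens": {"batch_size": 200, "epochs": 300},
    "SG":        {"batch_size": 512, "epochs": 500},
}

def make_optimizer(learning_rate):
    """ADAM optimizer, assuming the paper's TensorFlow implementation."""
    return tf.keras.optimizers.Adam(learning_rate=learning_rate)
```

A full reproduction would additionally require the NEST model itself, which the paper describes only in text (no pseudocode or public code, per the rows above).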