Undirected Probabilistic Model for Tensor Decomposition

Authors: Zerui Tao, Toshihisa Tanaka, Qibin Zhao

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments are conducted on synthetic and real-world datasets to showcase the advantages of our model. Through simulation studies, we demonstrate the capability of our model to handle data generated from diverse distributions, in contrast to traditional Gaussian-based models that yield unfaithful and biased estimates. Subsequently, experiments are performed on multiple real-world datasets to evaluate sparse and continuous-time tensor completion. Our model outperforms various baselines across multiple metrics and settings, highlighting the generality of the proposed model.
Researcher Affiliation | Academia | ¹Tokyo University of Agriculture and Technology, ²RIKEN AIP
Pseudocode | No | The paper does not contain any clearly labeled "Pseudocode" or "Algorithm" blocks.
Open Source Code | Yes | The code is available at https://github.com/taozerui/energy_td
Open Datasets | Yes | We test our model on two sparsely observed tensors: (1) Alog, a file access log dataset [52]...; (2) ACC, a three-way tensor generated from a code repository management system [52]... We evaluate our model on two continuous-time tensor datasets: (1) Air, the Beijing air quality dataset [47]...; (2) Click, an ads click dataset [43]...
Dataset Splits | Yes | We use the same dataset split as in [52] and report the 5-fold cross-validation results. We use the same dataset split as in [43] and report the 5-fold cross-validation results. (An illustrative 5-fold split sketch follows the table.)
Hardware Specification | Yes | All experiments are conducted on a Linux workstation with an Intel Xeon Silver 4316 CPU, 256 GB RAM, and NVIDIA RTX A5000 GPUs (24 GB memory each).
Software Dependencies | Yes | The code is implemented based on PyTorch 1.12.1 [30].
Experiment Setup | Yes | We set the batch size to 1000 and run 1000 epochs for Alog and 100 epochs for ACC. For our model, we use the Adam [17] optimizer. Learning rates of all models are chosen from {1e-2, 1e-3, 1e-4}. (A sketch of this training configuration follows the table.)
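The Dataset Splits row reports 5-fold cross-validation. The minimal sketch below illustrates that protocol on synthetic coordinate/value arrays; the array shapes are assumptions, and the random KFold split is only illustrative, since the paper reuses the predefined splits of [52] and [43] rather than resampling.

```python
# Illustrative 5-fold cross-validation over observed tensor entries.
# The synthetic data below is a stand-in; the paper uses the fixed
# splits of [52]/[43], so this only sketches the 5-fold protocol.
import numpy as np
from sklearn.model_selection import KFold

n_obs = 10_000
indices = np.random.randint(0, 100, size=(n_obs, 3))  # synthetic (i, j, k) coordinates
values = np.random.randn(n_obs)                       # synthetic observed entries

kf = KFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(kf.split(values)):
    train = (indices[train_idx], values[train_idx])   # fit the model on this fold
    test = (indices[test_idx], values[test_idx])      # evaluate completion error here
```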
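The Experiment Setup row lists a batch size of 1000, 1000 training epochs for Alog, the Adam optimizer, and a learning-rate grid of {1e-2, 1e-3, 1e-4}. The sketch below wires those reported settings together in PyTorch; the CP-style factor model is a generic stand-in, not the paper's energy-based model, and the tensor shape and rank are illustrative assumptions.

```python
# Sketch of the reported training configuration: Adam, batch size 1000,
# 1000 epochs (Alog), learning rates swept over {1e-2, 1e-3, 1e-4}.
# The CP-style factorization is a placeholder, NOT the paper's model.
import torch
from torch.utils.data import DataLoader, TensorDataset

shape, rank, n_obs = (200, 100, 200), 8, 10_000       # assumed tensor shape and rank
indices = torch.stack([torch.randint(s, (n_obs,)) for s in shape], dim=1)
values = torch.randn(n_obs)                           # synthetic observations
loader = DataLoader(TensorDataset(indices, values), batch_size=1000, shuffle=True)

for lr in (1e-2, 1e-3, 1e-4):                         # learning-rate grid from the paper
    factors = [torch.randn(s, rank, requires_grad=True) for s in shape]
    optimizer = torch.optim.Adam(factors, lr=lr)
    for epoch in range(1000):                         # 1000 epochs reported for Alog
        for idx, val in loader:
            # CP-style prediction: elementwise product of factor rows, summed over rank
            pred = (factors[0][idx[:, 0]]
                    * factors[1][idx[:, 1]]
                    * factors[2][idx[:, 2]]).sum(dim=1)
            loss = torch.mean((pred - val) ** 2)      # squared completion error
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```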