Bayesian Continuous-Time Tucker Decomposition

Authors: Shikai Fang, Akil Narayan, Robert Kirby, Shandian Zhe

ICML 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | For evaluation, we examined our approach in both an ablation study and real-world applications. On synthetic datasets, BCTT successfully learned different temporal dynamics and recovered the clustering structures of the tensor nodes from their factor estimates. On three real-world temporal tensor datasets, BCTT significantly outperforms the competing dynamic decomposition methods, including those based on discrete-time factors and continuous-time coefficients, often by a large margin.
Researcher Affiliation | Academia | School of Computing, University of Utah; Scientific Computing and Imaging (SCI) Institute, University of Utah; Department of Mathematics, University of Utah. Correspondence to: Shandian Zhe <zhe@cs.utah.edu>.
Pseudocode | Yes | Algorithm 1 BCTT (an illustrative sketch of the underlying Tucker contraction appears after this table).
Open Source Code | No | The paper mentions implementing BCTT with PyTorch (Paszke et al., 2019) but does not provide a link or an explicit statement about releasing the source code.
Open Datasets | Yes | (1) MovieLens 100K (https://grouplens.org/datasets/movielens/)... (2) Ads Click (https://www.kaggle.com/c/avazu-ctr-prediction)... (3) DBLP (https://dblp.uni-trier.de/xml/)...
Dataset Splits | Yes | We randomly sampled 80% of the observed entry values and their time points for training, and then tested on the remaining entries (see the split sketch after the table).
Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models or cloud instance specifications used for running the experiments.
Software Dependencies | No | The paper mentions PyTorch but does not specify its version number or any other software dependencies with explicit version numbers.
Experiment Setup | Yes | We used the Matérn kernel with ν = 3/2, and set l = σ^2 = 0.1 (see the kernel sketch after the table). We ran our message-passing inference until convergence; the tolerance level was set to 10^-3. ... For CT-CP, we used 100 knots for the polynomial splines. Except for BCTT, all methods were trained with stochastic mini-batch optimization, with mini-batch size 100. We used Adam optimization (Kingma and Ba, 2014), with the learning rate chosen from {10^-4, 5×10^-4, 10^-3, 5×10^-3, 10^-2}.
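
For context on the model class the Pseudocode row refers to: in a Tucker decomposition, each tensor entry is a multilinear contraction of a core tensor with one factor row per mode; in BCTT the core additionally evolves over continuous time under a Gaussian-process prior. Below is a minimal NumPy sketch of the static contraction only, not the paper's Algorithm 1; the function name `tucker_entry` and the shapes are illustrative assumptions.

```python
import numpy as np

def tucker_entry(core, factors, idx):
    """Contract a Tucker core with one factor row per mode to get
    the value of a single tensor entry (i1, ..., iK)."""
    val = core
    for U, i in zip(factors, idx):
        # Contract the leading mode of the remaining core with row i of U.
        val = np.tensordot(U[i], val, axes=([0], [0]))
    return float(val)

# Illustrative shapes: a 3-mode tensor with Tucker ranks (2, 3, 4).
rng = np.random.default_rng(0)
core = rng.normal(size=(2, 3, 4))       # in BCTT this core is time-varying
factors = [rng.normal(size=(10, 2)),    # mode-1 factors
           rng.normal(size=(20, 3)),    # mode-2 factors
           rng.normal(size=(30, 4))]    # mode-3 factors
print(tucker_entry(core, factors, (5, 12, 7)))
```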
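
The 80/20 protocol in the Dataset Splits row amounts to a uniform random permutation over observed entries. A minimal sketch, with hypothetical placeholder arrays (`entries`, `values`, `times`) and an arbitrary seed, since the paper reports neither data-loading code nor a seed:

```python
import numpy as np

rng = np.random.default_rng(42)  # seed is an assumption, not from the paper

# Hypothetical observed data: entry indices, values, and time stamps.
n_obs = 1000
entries = rng.integers(0, 50, size=(n_obs, 3))
values = rng.normal(size=n_obs)
times = rng.uniform(0.0, 1.0, size=n_obs)

# Randomly sample 80% of observed entries (with their time points) for
# training; test on the rest.
perm = rng.permutation(n_obs)
n_train = int(0.8 * n_obs)
train, test = perm[:n_train], perm[n_train:]
train_data = (entries[train], values[train], times[train])
test_data = (entries[test], values[test], times[test])
```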
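
The hyperparameters in the Experiment Setup row fully determine the temporal covariance: the Matérn kernel with ν = 3/2 has the standard closed form k(r) = σ^2 (1 + √3 r/l) exp(−√3 r/l). The sketch below implements that textbook formula with the reported setting l = σ^2 = 0.1; it is not code from the paper.

```python
import numpy as np

def matern32(t1, t2, lengthscale=0.1, variance=0.1):
    """Matern covariance with nu = 3/2; defaults follow the reported
    setting l = sigma^2 = 0.1."""
    r = np.abs(np.subtract.outer(np.asarray(t1, float), np.asarray(t2, float)))
    scaled = np.sqrt(3.0) * r / lengthscale
    return variance * (1.0 + scaled) * np.exp(-scaled)

# Covariance of the temporal prior evaluated at a few time points.
t = np.linspace(0.0, 1.0, 5)
K = matern32(t, t)
print(np.round(K, 4))
```

The remaining settings in that row (mini-batch size 100, Adam, learning-rate grid) apply to the baseline methods' stochastic training rather than to BCTT's message-passing inference.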