Dynamic Tensor Decomposition via Neural Diffusion-Reaction Processes

Authors: Zheng Wang, Shikai Fang, Shibo Li, Shandian Zhe

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We show the advantage of our approach in both simulation study and real-world applications.
Researcher Affiliation | Academia | Zheng Wang, Kahlert School of Computing, University of Utah, Salt Lake City, UT 84112, u1208847@utah.edu; Shikai Fang, Kahlert School of Computing, University of Utah, Salt Lake City, UT 84112, shikai.fang@utah.edu; Shibo Li, Kahlert School of Computing, University of Utah, Salt Lake City, UT 84112, shibo@cs.utah.edu; Shandian Zhe, Kahlert School of Computing, University of Utah, Salt Lake City, UT 84112, zhe@cs.utah.edu
Pseudocode | No | The paper describes various algorithmic steps and formulations but does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | The code is available at https://github.com/wzhut/Dynamic-Tensor-Decompositionvia-Neural-Diffusion-Reaction-Processes.
Open Datasets | Yes | Datasets. We next evaluated the predictive performance of DEMOTE in three real-world applications. (1) CA Weather (Moosavi et al., 2019) (https://smoosavi.org/datasets/lstw)... (2) CA Traffic (Moosavi et al., 2019) (https://smoosavi.org/datasets/lstw)... (3) Server Room (https://zenodo.org/record/3610078#.XlNpAigzaM8)...
Dataset Splits | No | The paper specifies a train/test split (e.g., 'randomly draw 80% observed entries and their time stamps for training, with the remaining for test') but does not explicitly mention a separate validation split or cross-validation for its own method.
Hardware Specification | Yes | We tested all the methods in a workstation with one NVIDIA GeForce RTX 3090 Graphics Card, 10th Generation Intel Core i9-10850K Processor, 32 GB RAM, and 1 TB SSD.
Software Dependencies | No | The paper mentions 'PyTorch' and the 'torchdiffeq' library but does not specify their version numbers for reproducibility. (A minimal usage sketch of torchdiffeq follows the table.)
Experiment Setup | Yes | We set the mini-batch size to 50, and used the ADAM (Kingma and Ba, 2014) algorithm for stochastic optimization. The learning rate was automatically adjusted in [10^-4, 10^-1] by the ReduceLROnPlateau scheduler (Al-Kababji et al., 2022). The maximum number of epochs is 2K, which is enough for convergence. (A hedged sketch of this configuration appears after the table.)
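Because the PyTorch and torchdiffeq versions are not reported, the following is a minimal, hypothetical sketch (assuming any recent torchdiffeq release) of the odeint call that a neural-ODE reimplementation in PyTorch would rely on. The derivative network, initial state, and time grid here are placeholders, not the authors' diffusion-reaction model.

```python
# Minimal torchdiffeq sketch; versions are unreported in the paper,
# so this assumes any recent release. The dynamics below are a toy
# placeholder, not the authors' diffusion-reaction process.
import torch
from torchdiffeq import odeint  # pip install torchdiffeq

class ToyDynamics(torch.nn.Module):
    """Hypothetical right-hand side f(t, z) for dz/dt = f(t, z)."""
    def __init__(self, dim=8):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim, 32), torch.nn.Tanh(), torch.nn.Linear(32, dim)
        )

    def forward(self, t, z):
        return self.net(z)

z0 = torch.zeros(8)                  # placeholder initial state
t = torch.linspace(0.0, 1.0, 20)     # placeholder time grid
traj = odeint(ToyDynamics(), z0, t)  # trajectory of shape (20, 8)
```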
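The Dataset Splits and Experiment Setup rows translate directly into a standard PyTorch training configuration. The sketch below mirrors the reported choices (80%/20% train/test split, mini-batch size 50, Adam, ReduceLROnPlateau keeping the learning rate within [10^-4, 10^-1], at most 2K epochs); the model, loss, and data are placeholders, so it illustrates the stated hyperparameters rather than reproducing DEMOTE's actual training code.

```python
# Hedged sketch of the reported training configuration; the model,
# loss, and dataset below are placeholders, not DEMOTE itself.
import torch
from torch.utils.data import DataLoader, TensorDataset, random_split

# Placeholder data standing in for observed tensor entries and targets.
X, y = torch.randn(1000, 4), torch.randn(1000, 1)
dataset = TensorDataset(X, y)

# 80% of observed entries for training, the remainder for test.
n_train = int(0.8 * len(dataset))
train_set, test_set = random_split(dataset, [n_train, len(dataset) - n_train])
loader = DataLoader(train_set, batch_size=50, shuffle=True)  # mini-batch size 50

model = torch.nn.Linear(4, 1)  # placeholder for the actual model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-1)  # start at the upper bound
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, factor=0.5, patience=10, min_lr=1e-4  # keep lr within [1e-4, 1e-1]
)

for epoch in range(2000):  # maximum of 2K epochs
    epoch_loss = 0.0
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = torch.nn.functional.mse_loss(model(xb), yb)
        loss.backward()
        optimizer.step()
        epoch_loss += loss.item()
    scheduler.step(epoch_loss)  # plateau-based learning-rate adjustment
```

ReduceLROnPlateau only ever lowers the learning rate, so the reported range [10^-4, 10^-1] is read here as starting at 10^-1 and bounding the decay at 10^-4 via min_lr; that reading is an assumption, since the paper does not state the initial value explicitly.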