Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Dynamic Tensor Decomposition via Neural Diffusion-Reaction Processes

Authors: Zheng Wang, Shikai Fang, Shibo Li, Shandian Zhe

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We show the advantage of our approach in both simulation study and real-world applications."
Researcher Affiliation | Academia | Zheng Wang, Shikai Fang, Shibo Li, Shandian Zhe; all four authors: Kahlert School of Computing, University of Utah, Salt Lake City, UT 84112, EMAIL.
Pseudocode | No | The paper describes various algorithmic steps and formulations but does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | "The code is available at https://github.com/wzhut/Dynamic-Tensor-Decompositionvia-Neural-Diffusion-Reaction-Processes."
Open Datasets | Yes | "Datasets. We next evaluated the predictive performance of DEMOTE in three real-world applications. (1) CA Weather (Moosavi et al., 2019) (https://smoosavi.org/datasets/lstw)... (2) CA Traffic (Moosavi et al., 2019) (https://smoosavi.org/datasets/lstw)... (3) Server Room (https://zenodo.org/record/3610078#.XlNpAigzaM8)..."
Dataset Splits | No | The paper specifies a train/test split ("randomly draw 80% observed entries and their time stamps for training, with the remaining for test") but does not explicitly mention a separate validation split or cross-validation for its own method.
Hardware Specification | Yes | "We tested all the methods in a workstation with one NVIDIA GeForce RTX 3090 graphics card, 10th-generation Intel Core i9-10850K processor, 32 GB RAM, and 1 TB SSD."
Software Dependencies | No | The paper mentions PyTorch and the torchdiffeq library but does not specify their version numbers for reproducibility.
Experiment Setup | Yes | "We set the mini-batch size to 50, and used the Adam (Kingma and Ba, 2014) algorithm for stochastic optimization. The learning rate was automatically adjusted in [1e-4, 1e-1] by the ReduceLROnPlateau scheduler (Al-Kababji et al., 2022). The maximum number of epochs is 2K, which is enough for convergence."
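The 80/20 train/test split quoted under "Dataset Splits" can be sketched in plain Python. The entry format and seed below are hypothetical placeholders, not from the paper:

```python
import random

def train_test_split(entries, train_frac=0.8, seed=0):
    """Randomly draw a fraction of observed entries (with their
    time stamps) for training; the remainder forms the test set."""
    rng = random.Random(seed)
    shuffled = entries[:]  # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    return shuffled[:n_train], shuffled[n_train:]

# Hypothetical observed tensor entries: (index tuple, time stamp, value)
observed = [((i, j), 0.1 * t, float(i + j))
            for i in range(5) for j in range(5) for t in range(4)]
train, test = train_test_split(observed)
```

Note that such a split reserves no entries for validation, which is exactly the gap the row above flags.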
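The learning-rate policy in the experiment setup (ReduceLROnPlateau with the rate kept in [1e-4, 1e-1]) can be illustrated with a minimal pure-Python sketch of the plateau rule; the `factor` and `patience` values are assumed defaults, since the paper does not report them:

```python
def reduce_lr_on_plateau(lr, loss_history, factor=0.1, patience=10,
                         min_lr=1e-4, max_lr=1e-1):
    """One scheduler check, mimicking PyTorch's ReduceLROnPlateau:
    if the loss has not improved over the last `patience` epochs,
    multiply the learning rate by `factor`, clamping it to the
    [min_lr, max_lr] range described in the paper's setup."""
    lr = min(lr, max_lr)
    if len(loss_history) > patience:
        best_before = min(loss_history[:-patience])
        recent_best = min(loss_history[-patience:])
        if recent_best >= best_before:  # plateau: no improvement seen
            lr = max(lr * factor, min_lr)
    return lr

# Flat loss for 20 epochs -> the rate is cut by `factor`.
lr_after_plateau = reduce_lr_on_plateau(0.1, [1.0] * 20)
# Recent improvement -> the rate is left unchanged.
lr_unchanged = reduce_lr_on_plateau(0.1, [1.0] * 11 + [0.5] * 10)
```

In an actual PyTorch training loop this logic would come from `torch.optim.lr_scheduler.ReduceLROnPlateau` paired with an Adam optimizer, stepped once per epoch with the monitored loss.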