Multi-version Tensor Completion for Time-delayed Spatio-temporal Data

Authors: Cheng Qian, Nikos Kargas, Cao Xiao, Lucas Glass, Nicholas Sidiropoulos, Jimeng Sun

IJCAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate the MTC's performance in terms of prediction accuracy and scalability. We obtain up to 27.2% lower root mean-squared error compared to the best baseline method.
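The headline result is stated in terms of root mean-squared error (RMSE). As a point of reference, here is a minimal, self-contained sketch of how RMSE over observed entries is computed and how a "27.2% lower RMSE" claim relates two models; the function name and sample values are illustrative, not taken from the paper.

```python
import math

def rmse(y_true, y_pred):
    """Root mean-squared error over a set of observed entries."""
    assert len(y_true) == len(y_pred)
    squared_error = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    return math.sqrt(squared_error / len(y_true))

# "Up to 27.2% lower RMSE than the best baseline" means:
#   rmse_mtc <= (1 - 0.272) * rmse_baseline
baseline_rmse = rmse([1.0, 2.0, 3.0], [1.5, 2.5, 2.0])  # illustrative values
print(round(baseline_rmse, 4))
```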
Researcher Affiliation | Collaboration | 1) Analytics Center of Excellence, IQVIA; 2) Department of Electrical and Computer Engineering, University of Minnesota Twin Cities; 3) Department of Electrical and Computer Engineering, University of Virginia; 4) Department of Computer Science, University of Illinois Urbana-Champaign
Pseudocode | Yes | The detailed steps of MTC are summarized in Algorithm 1 in the supplementary material.
Open Source Code | No | The paper does not provide an explicit statement about releasing source code or a link to a code repository for the described methodology.
Open Datasets | Yes | We consider the following datasets for evaluation. Chicago-Crime dataset [Smith et al., 2017] includes 5,884,473 crime reports in Chicago, ranging from 2001 to 2017. COVID-19 dataset [Dong et al., 2020] summarizes the COVID-19 daily reports from Johns Hopkins University.
Dataset Splits | Yes | We consider both static and dynamic cases. We select the first S GDs from each dataset for the static case while reserving the remaining for the dynamic case. Here, S is set to 421, 168, and 46 for Chicago-Crime, COVID-19, and Patient-Claims, respectively.
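The static/dynamic split quoted above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's code: the function name and the assumption that GDs are an ordered list are hypothetical; the per-dataset values of S are the ones quoted in the table.

```python
# Values of S quoted in the paper for each dataset.
S_PER_DATASET = {"Chicago-Crime": 421, "COVID-19": 168, "Patient-Claims": 46}

def split_static_dynamic(gds, dataset):
    """Split an ordered list of GDs: the first S go to the static case,
    the remainder to the dynamic case."""
    s = S_PER_DATASET[dataset]
    return gds[:s], gds[s:]

# Usage with placeholder GD identifiers:
static, dynamic = split_static_dynamic(list(range(500)), "Chicago-Crime")
print(len(static), len(dynamic))  # 421 79
```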
Hardware Specification | Yes | All methods were trained on a 2.6 GHz 6-core Intel Core i7 with 16 GB memory and a 256 GB SSD.
Software Dependencies | No | The paper mentions using Tensorlab and other models such as ARIMA, LSTM, and COSTCO, but does not specify exact version numbers for any software dependencies.
Experiment Setup | No | The paper describes the experimental setup in terms of data usage (static vs. dynamic cases) and baselines, and states "The implementation details are provided in the appendix." However, the provided text does not contain specific hyperparameters such as learning rate, batch size, or optimizer settings in the main body.