AutoST: Towards the Universal Modeling of Spatio-temporal Sequences

Authors: Jianxin Li, Shuai Zhang, Hui Xiong, Haoyi Zhou

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on five real-world datasets demonstrate that UniST with any single type of our three proposed modules can achieve state-of-the-art performance. Furthermore, AutoST can achieve overwhelming performance with UniST. This section empirically evaluates the effectiveness of the UniST and AutoST models with short-term, medium-term, and long-term ST sequence forecasting tasks on five real-world datasets.
Researcher Affiliation | Academia | Jianxin Li, BDBC, Beihang University, Beijing, China 100191 (lijx@buaa.edu.cn); Shuai Zhang, BDBC, Beihang University, Beijing, China 100191 (zhangs@act.buaa.edu.cn); Hui Xiong, HKUST(GZ), HKUST FYTRI, Guangzhou, China 511455 (xionghui@ust.hk); Haoyi Zhou, BDBC, Beihang University, Beijing, China 100191 (haoyi@buaa.edu.cn)
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks with explicit labels such as 'Pseudocode' or 'Algorithm'.
Open Source Code | Yes | The code is available at https://github.com/shuaibuaa/autost2022. The code, data and instructions to reproduce the experimental results are provided in the supplemental material.
Open Datasets | Yes | METR-LA [8]: The traffic speed dataset contains 4 months of data from March 1, 2012 to June 30, 2012, recorded by sensors at 207 different locations on highways in Los Angeles County, USA. PEMS-BAY [14]: The traffic speed dataset comes from the California Transportation Agencies (CalTrans) Performance Evaluation System (PeMS). PEMS-03/04/08 [3]: The three traffic datasets are also from the PeMS system of the California Transportation Agency...
Dataset Splits | Yes | Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] These details are in the Appendix Section B. The objective function is: $\min_{\alpha} \mathcal{L}_{val}(w^{*}(\alpha), \alpha)$ s.t. $w^{*}(\alpha) = \arg\min_{w} \mathcal{L}_{train}(w, \alpha)$, where $\alpha$ is the architecture and $w$ are the model weights. (A sketch of how such a bi-level objective is typically optimized follows this table.)
Hardware Specification | Yes | Platform: Intel(R) Xeon(R) CPU 2.40GHz x 2 + NVIDIA Tesla V100 GPU (32 GB) x 4.
Software Dependencies | No | The paper mentions 'MindSpore, CANN (Compute Architecture for Neural Networks) and Ascend AI Processor' as tools used for the research, but it does not provide specific version numbers for these or any other software dependencies.
Experiment Setup | Yes | Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] These details are in the Appendix Section B.
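The objective quoted in the Dataset Splits row is the standard bi-level architecture-search formulation. Below is a minimal sketch of the alternating, first-order update scheme commonly used to optimize such an objective, assuming PyTorch-style models and optimizers; the search_step function, its arguments, and the weight_parameters()/arch_parameters() accessors in the usage comment are illustrative assumptions, not the paper's actual implementation (see the linked repository for that).

    def search_step(model, train_batch, val_batch, loss_fn,
                    w_optimizer, alpha_optimizer):
        """One alternating update for the bi-level objective:
        weights w are updated on the training loss, architecture
        parameters alpha on the validation loss, approximating
        w*(alpha) by the current weights (first-order scheme)."""
        x_train, y_train = train_batch
        x_val, y_val = val_batch

        # Update model weights w on the training split, alpha held fixed.
        w_optimizer.zero_grad()
        loss_fn(model(x_train), y_train).backward()
        w_optimizer.step()

        # Update architecture parameters alpha on the validation split.
        alpha_optimizer.zero_grad()
        loss_fn(model(x_val), y_val).backward()
        alpha_optimizer.step()

    # Hypothetical usage: two optimizers over disjoint parameter groups,
    # e.g. w_optimizer = torch.optim.SGD(model.weight_parameters(), lr=1e-2)
    # and alpha_optimizer = torch.optim.Adam(model.arch_parameters(), lr=3e-4).

Repeating this step over the training and validation splits alternates between fitting the model weights and steering the architecture parameters, which is why explicit train/validation splits matter for this class of methods.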