AutoSTL: Automated Spatio-Temporal Multi-Task Learning

Authors: Zijian Zhang, Xiangyu Zhao, Hao Miao, Chunxu Zhang, Hongwei Zhao, Junbo Zhang

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on benchmark datasets verified that our model achieves state-of-the-art performance.
Researcher Affiliation | Collaboration | (1) College of Computer Science and Technology, Jilin University, China; (2) School of Data Science, City University of Hong Kong, Hong Kong; (3) Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, China; (4) Department of Computer Science, Aalborg University, Denmark; (5) JD Intelligent Cities Research, China; (6) JD iCity, JD Technology, China; (7) Hong Kong Institute for Data Science, City University of Hong Kong, Hong Kong
Pseudocode | No | The paper describes its methodology using textual descriptions and mathematical formulas but does not include any pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide a direct link to a source code repository or an explicit statement confirming the release of its code.
Open Datasets | Yes | We evaluate AutoSTL on two commonly used real-world benchmark datasets for spatio-temporal prediction, i.e., NYC Taxi (https://www1.nyc.gov/site/tlc/about/tlc-trip-record-data.page) and PEMSD4 (http://pems.dot.ca.gov/).
Dataset Splits | No | The paper mentions using 'validation data' for optimization, but it does not specify the exact percentages or counts for training, validation, and test splits.
Hardware Specification | No | The paper does not specify any hardware details (e.g., GPU models, CPU types, or memory) used for running the experiments.
Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies or libraries used in the implementation.
Experiment Setup | Yes | We predict the traffic attribute of the next time interval based on the previous 12 time steps, i.e., |T| = 12. In terms of model structure, we assign 1 task-specific module per task and 1 shared module in each hidden layer, e.g., 3 modules in one hidden layer for two-task learning. We stack 3 hidden layers in total. We test hidden sizes in {16, 32, 64, 128}.
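
To make the quoted setup concrete, the following is a minimal, hypothetical PyTorch sketch of the described layer layout, not the authors' implementation (the paper releases no code). The class names (STLLayer, STLBackbone), the plain linear modules, and the equal-weight sum of shared and task-specific outputs are illustrative assumptions; AutoSTL itself selects the module operations and fusion weights automatically. Hidden size 64 is fixed here as one value from the tested grid {16, 32, 64, 128}.

```python
import torch
import torch.nn as nn

class STLLayer(nn.Module):
    """One hidden layer: a task-specific module per task plus one shared
    module, i.e., 3 modules per layer for two-task learning."""
    def __init__(self, dim, num_tasks=2):
        super().__init__()
        self.task_specific = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_tasks))
        self.shared = nn.Linear(dim, dim)

    def forward(self, states):
        # states: one hidden tensor per task. Each task combines its own
        # module with the shared module; the equal weighting below is an
        # assumption, since AutoSTL learns the fusion weights.
        return [torch.relu(specific(h) + self.shared(h))
                for specific, h in zip(self.task_specific, states)]

class STLBackbone(nn.Module):
    """Three stacked hidden layers with one prediction head per task."""
    def __init__(self, in_dim, hidden=64, num_tasks=2, depth=3):
        super().__init__()
        self.embed = nn.Linear(in_dim, hidden)
        self.layers = nn.ModuleList(STLLayer(hidden, num_tasks) for _ in range(depth))
        self.heads = nn.ModuleList(nn.Linear(hidden, 1) for _ in range(num_tasks))

    def forward(self, x):
        # x: (batch, |T| * features), flattened from the 12 historical steps.
        h = [torch.relu(self.embed(x)) for _ in range(len(self.heads))]
        for layer in self.layers:
            h = layer(h)
        # One one-step-ahead prediction per task.
        return [head(hi) for head, hi in zip(self.heads, h)]

model = STLBackbone(in_dim=12)     # |T| = 12 historical steps, 1 feature per step
preds = model(torch.randn(8, 12))  # two (batch, 1) one-step-ahead predictions
```

A reproduction would additionally sweep the hidden size over {16, 32, 64, 128} on validation data and replace the fixed fusion above with the paper's automatically learned weighting.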