Memory Augmented State Space Model for Time Series Forecasting
Authors: Yinbo Sun, Lintao Ma, Yu Liu, Shijun Wang, James Zhang, YangFei Zheng, Hu Yun, Lei Lei, Yulin Kang, Linbao Ye
IJCAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our results demonstrate the competitive forecasting performance of our proposed model compared with other state-of-the-art SSMs. We evaluate our model for multivariate time series forecasting on five popular public datasets including electricity, solar, traffic, exchange, and wikipedia. The results are shown in Table 1, where the mean and standard errors are obtained from three independent runs. A hedged sketch of this mean/standard-error aggregation is given after the table. |
| Researcher Affiliation | Industry | Yinbo Sun, Lintao Ma, Yu Liu, Shijun Wang, James Zhang, YangFei Zheng, Hu Yun, Lei Lei, Yulin Kang and Linbao Ye, Ant Group. {yinbo.syb, lintao.mlt, nuoman.ly, shijun.wang, james.z, yangfei.zfy, huyun.h, jason.ll, yulin.kyl, linbao.ylb}@antgroup.com |
| Pseudocode | No | The paper describes the model architecture and inference process in detail but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code (e.g., a repository link or an explicit statement about code release for their method). |
| Open Datasets | Yes | We evaluate our model for multivariate time series forecasting on five popular public datasets including electricity, solar, traffic, exchange, and wikipedia (see [Rasul et al., 2020] for the properties and the summary of the datasets). A hedged dataset-loading sketch, assuming the GluonTS repository used by Rasul et al., follows the table. |
| Dataset Splits | No | The portion of each dataset prior to the fixed forecast date is used as training data and the remaining data as prediction data; rolling-window prediction starting from the fixed forecast date is adopted as the evaluation method. The paper thus describes a train/prediction (test) split based on a fixed forecast date, but no explicit validation set or split percentages/counts for train/validation/test are provided. A hedged sketch of such a rolling-window split follows the table. |
| Hardware Specification | No | The paper mentions time and space complexities but does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper mentions architectural components like GRU and MLP networks, and methods like Real NVP, but does not specify any software libraries or frameworks with version numbers (e.g., "PyTorch 1.9" or "TensorFlow 2.x"). |
| Experiment Setup | No | The paper describes the generative and inference models but does not provide specific experimental setup details such as hyperparameter values (e.g., learning rate, batch size, number of epochs, optimizer settings). |
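The reported means and standard errors in Table 1 come from three independent runs. The snippet below is a minimal sketch of that aggregation step; the metric values shown are placeholders, not results from the paper, and the metric itself (e.g., CRPS) is not reproduced here.

```python
import numpy as np

def summarize_runs(metric_values):
    """Aggregate a metric (e.g., CRPS) over independent runs into a mean and
    standard error, as reported in the paper's Table 1 (three runs)."""
    values = np.asarray(metric_values, dtype=float)
    mean = values.mean()
    std_error = values.std(ddof=1) / np.sqrt(len(values))
    return mean, std_error

# Placeholder numbers for illustration only (not values from the paper):
print(summarize_runs([0.052, 0.049, 0.051]))
```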
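The five benchmarks (electricity, solar, traffic, exchange, wikipedia) are the same multivariate datasets used by Rasul et al., 2020, which are commonly obtained through the GluonTS dataset repository. The following is a hedged loading sketch under that assumption; the paper does not confirm this tooling, and the dataset identifiers are the GluonTS names, not ones stated in the paper.

```python
# Assumption: datasets are fetched via GluonTS, as in Rasul et al., 2020.
from gluonts.dataset.repository.datasets import get_dataset
from gluonts.dataset.multivariate_grouper import MultivariateGrouper

# Other candidate names: solar_nips, traffic_nips, exchange_rate_nips, wiki-rolling_nips.
dataset = get_dataset("electricity_nips", regenerate=False)

target_dim = int(dataset.metadata.feat_static_cat[0].cardinality)
train_grouper = MultivariateGrouper(max_target_dim=target_dim)
test_grouper = MultivariateGrouper(
    num_test_dates=int(len(dataset.test) / len(dataset.train)),
    max_target_dim=target_dim,
)

train_ds = train_grouper(dataset.train)  # single grouped multivariate training series
test_ds = test_grouper(dataset.test)     # rolling evaluation windows
```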
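The evaluation protocol described in the paper (training on data before a fixed forecast date, then rolling-window prediction from that date onward) can be sketched as below. The indexing convention, function name, and window count are illustrative assumptions; the paper gives no explicit split sizes.

```python
import numpy as np

def rolling_window_splits(series: np.ndarray, forecast_start: int,
                          prediction_length: int, num_windows: int):
    """Split one (time x dims) series into the training slice before the fixed
    forecast date and successive rolling evaluation windows after it."""
    train = series[:forecast_start]
    windows = []
    for k in range(num_windows):
        context_end = forecast_start + k * prediction_length
        context = series[:context_end]                                  # all data observed so far
        target = series[context_end:context_end + prediction_length]   # next window to forecast
        windows.append((context, target))
    return train, windows
```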