Towards Better Forecasting by Fusing Near and Distant Future Visions

Authors: Jiezhu Cheng, Kaizhu Huang, Zibin Zheng (pp. 3593-3600)

AAAI 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on three real-world datasets show that our method achieves statistically significant improvements compared to the most state-of-the-art baseline methods, with average 4.59% reduction on RMSE metric and average 6.87% reduction on MAE metric. (The RMSE and MAE metrics are sketched after the table.)
Researcher Affiliation | Collaboration | (1) Sun Yat-sen University, School of Data and Computer Science, Guangzhou, China; (2) Sun Yat-sen University, National Engineering Research Center of Digital Life, Guangzhou, China; (3) Xi'an Jiaotong-Liverpool University, Department of Electrical and Electronic Engineering, Suzhou, China; (4) Alibaba-Zhejiang University Joint Institute of Frontier Technologies, Hangzhou, China
Pseudocode | No | The paper describes the model architecture using mathematical equations and textual descriptions but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | All the data and experiment codes of our model are available at GitHub: https://github.com/smallGum/MLCNN-Multivariate-Time-Series
Open Datasets | Yes | As depicted in Table 1, our experiments are based on three publicly available datasets: Traffic (Lai et al. 2017): This dataset consists of 48 months (2015-2016) hourly data from the California Department of Transportation... Energy (Candanedo, Feldheim, and Deramaix 2017): This UCI appliances energy dataset contains measurements of 29 different quantities... NASDAQ (Qin et al. 2017): This dataset includes the stock prices of 81 major corporations and the index value of NASDAQ 100...
Dataset Splits | Yes | Table 1: Dataset statistics ... Train size 60% / 80% / 90%; Valid size 20% / 10% / 5%; Test size 20% / 10% / 5% (one value per dataset). (A chronological-split sketch follows the table.)
Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments, such as CPU or GPU models, or cloud computing specifications.
Software Dependencies | No | The paper describes the use of CNN, LSTM, and the Adam optimizer but does not specify version numbers for any software libraries, frameworks, or programming languages used in the implementation.
Experiment Setup | Yes | For RNN-LSTM, we vary the number of hidden state size in {10, 25, 50, 100, 200}. For MTCNN, the filter number of CNN is chosen from {5, 10, 25, 50, 100}... The dropout rate of our model is chosen from {0.2, 0.3, 0.5}. During the training phase, the batch size is 128 and the learning rate is 0.001. (A configuration sketch follows the table.)
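
The Research Type row reports average reductions on the RMSE and MAE metrics. For reference, a minimal sketch of how these two error metrics are commonly computed for multivariate forecasts is given below; the array shapes and function names are illustrative assumptions, not taken from the authors' released code.

```python
import numpy as np

def rmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Root mean squared error over all time steps and variables."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mae(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Mean absolute error over all time steps and variables."""
    return float(np.mean(np.abs(y_true - y_pred)))

# Illustrative usage on a (time steps, variables) forecast matrix.
rng = np.random.default_rng(0)
y_true = rng.random((100, 8))
y_pred = y_true + 0.05 * rng.standard_normal((100, 8))
print(rmse(y_true, y_pred), mae(y_true, y_pred))
```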
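
The Dataset Splits row quotes per-dataset train/validation/test proportions (e.g. 60%/20%/20%). A minimal sketch of a chronological split of a time-ordered array, the usual practice for forecasting benchmarks, might look as follows; the function name and data shape are assumptions for illustration only.

```python
import numpy as np

def chronological_split(data: np.ndarray, train_ratio: float = 0.6, valid_ratio: float = 0.2):
    """Split a time-ordered array into train/valid/test segments without shuffling."""
    n = len(data)
    train_end = int(n * train_ratio)
    valid_end = int(n * (train_ratio + valid_ratio))
    return data[:train_end], data[train_end:valid_end], data[valid_end:]

# Example with an illustrative (time steps, variables) matrix and a 60/20/20 split.
series = np.random.default_rng(0).random((10000, 82))
train, valid, test = chronological_split(series, train_ratio=0.6, valid_ratio=0.2)
```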
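
The Experiment Setup row lists hyperparameter search grids, a batch size of 128, and a learning rate of 0.001 (with the Adam optimizer noted under Software Dependencies). A PyTorch-style sketch of such a grid search is shown below; the placeholder model is a generic CNN-plus-LSTM stack, not the paper's MLCNN architecture, and every name outside the quoted grids is an assumption.

```python
import itertools
import torch
import torch.nn as nn

# Hyperparameter grids quoted in the table; everything else here is an assumption.
SEARCH_SPACE = {
    "hidden_size": [10, 25, 50, 100, 200],  # RNN-LSTM hidden state size
    "n_filters":   [5, 10, 25, 50, 100],    # CNN filter number
    "dropout":     [0.2, 0.3, 0.5],         # model dropout rate
}
BATCH_SIZE = 128       # "the batch size is 128"
LEARNING_RATE = 0.001  # "the learning rate is 0.001"

class TinyForecaster(nn.Module):
    """Placeholder CNN + LSTM forecaster; NOT the paper's MLCNN model."""

    def __init__(self, n_vars: int, n_filters: int, hidden_size: int, dropout: float):
        super().__init__()
        self.conv = nn.Conv1d(n_vars, n_filters, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(n_filters, hidden_size, batch_first=True)
        self.drop = nn.Dropout(dropout)
        self.out = nn.Linear(hidden_size, n_vars)

    def forward(self, x):                         # x: (batch, time, n_vars)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)
        h, _ = self.lstm(h)
        return self.out(self.drop(h[:, -1]))      # one-step-ahead prediction

for hidden_size, n_filters, dropout in itertools.product(*SEARCH_SPACE.values()):
    model = TinyForecaster(n_vars=8, n_filters=n_filters,
                           hidden_size=hidden_size, dropout=dropout)
    optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
    # ... train with mini-batches of size BATCH_SIZE and select the best
    #     configuration on the validation split ...
```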