MixSeq: Connecting Macroscopic Time Series Forecasting with Microscopic Time Series Data
Authors: Zhibo Zhu, Ziqi Liu, Ge Jin, Zhiqiang Zhang, Lei Chen, Jun Zhou, Jianyong Zhou
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on both synthetic and real-world data show the superiority of our approach. |
| Researcher Affiliation | Industry | Zhibo Zhu (Ant Group, gavin.zzb@antgroup.com); Ziqi Liu (Ant Group, ziqiliu@antgroup.com); Ge Jin (Ant Group, elvis.jg@antgroup.com); Zhiqiang Zhang (Ant Group, lingyao.zzq@antgroup.com); Lei Chen (Ant Group, qingli.cl@antgroup.com); Jun Zhou (Ant Group, jun.zhoujun@antgroup.com); Jianyong Zhou (Ant Group, neil.zjy@antgroup.com) |
| Pseudocode | No | The paper describes the model and algorithms in prose and mathematical equations but does not include a clearly labeled pseudocode or algorithm block. |
| Open Source Code | Yes | Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] See Section 5 and supplementary. |
| Open Datasets | Yes | We report results on three real-world datasets, including Rossmann, M5 and Wiki [35]. ... To generate data from ARMA, we use ARMA(2, 0)... To generate data from DeepAR, we use the DeepAR model... We train a base model on the real-world Wiki dataset [35] |
| Dataset Splits | Yes | We set the length of time series as 360, and use rolling window approach for training and validating our results in the last 120 time steps (i.e., at each time step, we train the model using the time series before current time point, and validate using the following 30 values). ... The data of last two months in train interval are used as validation data to find the optimal model. |
| Hardware Specification | No | The paper states: 'Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] See supplementary.' However, no specific hardware details (like GPU models or CPU types) are provided within the main body of the paper. |
| Software Dependencies | No | The paper mentions using 'Python packages' for ARMA and Prophet, but no specific versions for these or any other software components are provided. |
| Experiment Setup | Yes | We do grid search for the following hyperparameters in clustering and forecasting algorithms, i.e., the number of clusters {3, 5, 7}, the learning rate {0.001, 0.0001}, the penalty weight on the ℓ2-norm regularizers {1e-5, 5e-5}, and the dropout rate {0, 0.1}. ... we set batch size as 128, and the number of training epochs as 300 for Rossmann, 50 for M5 and 20 for Wiki. |
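The grid search described in the Experiment Setup row can be sketched as follows. This is a minimal illustration using only the hyperparameter values quoted above; `evaluate` is a hypothetical stand-in for the paper's actual train-and-validate routine, which is not reproduced here.

```python
from itertools import product

# Hyperparameter grid reported in the paper's experiment setup.
GRID = {
    "num_clusters": [3, 5, 7],
    "learning_rate": [1e-3, 1e-4],
    "l2_penalty": [1e-5, 5e-5],
    "dropout": [0.0, 0.1],
}

def grid_search(evaluate, grid=GRID):
    """Exhaustively try every configuration and return the one with
    the lowest validation loss, as reported by `evaluate(cfg)`."""
    keys = list(grid)
    best_cfg, best_loss = None, float("inf")
    for values in product(*(grid[k] for k in keys)):
        cfg = dict(zip(keys, values))
        loss = evaluate(cfg)  # hypothetical: trains MixSeq, returns val loss
        if loss < best_loss:
            best_cfg, best_loss = cfg, loss
    return best_cfg, best_loss
```

With this grid there are 3 × 2 × 2 × 2 = 24 configurations per dataset; the paper selects the optimal model on held-out validation data (the last two months of the training interval).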