Peri-midFormer: Periodic Pyramid Transformer for Time Series Analysis

Authors: Qiang Wu, Gechang Yao, Zhixi Feng, Shuyuan Yang

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our proposed Peri-midFormer demonstrates outstanding performance in five mainstream time series analysis tasks, including short- and long-term forecasting, imputation, classification, and anomaly detection. The code is available at https://github.com/WuQiangXDU/Peri-midFormer.
Researcher Affiliation | Academia | Qiang Wu, Gechang Yao, Zhixi Feng, Shuyuan Yang. Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, School of Artificial Intelligence, Xidian University, China. {wu_qiang, yao_gechang}@stu.xidian.edu.cn, {zxfeng, syyang}@xidian.edu.cn
Pseudocode | No | The paper describes its methodology using textual descriptions, equations, and flowcharts (e.g., Figure 2, Figure 8) but does not include explicit pseudocode or algorithm blocks.
Open Source Code | Yes | The code is available at https://github.com/WuQiangXDU/Peri-midFormer.
Open Datasets | Yes | A detailed description of the dataset is given in Table 6. Table 6: Dataset descriptions; dataset size organized as (Train, Validation, Test). Columns: Tasks, Dataset, Dim, Series Length, Dataset Size, Information (Frequency). Example row: ETTm1, ETTm2 — Dim 7; Series Length {96, 192, 336, 720}; Dataset Size (34465, 11521, 11521); Electricity (15 mins).
Dataset Splits | Yes | A detailed description of the dataset is given in Table 6. Table 6: Dataset descriptions. The dataset size is organized in (Train, Validation, Test). (A chronological-split sketch using the quoted sizes appears after this table.)
Hardware Specification | Yes | All the deep learning networks are implemented in PyTorch and trained on NVIDIA 4090 24GB GPU.
Software Dependencies | No | All the deep learning networks are implemented in PyTorch and trained on NVIDIA 4090 24GB GPU. (PyTorch is named, but no version numbers or dependency list are provided.)
Experiment Setup | Yes | The detailed experiment configuration is shown in Table 7. Table 7: Experiment configuration of Peri-midFormer. All the experiments use the ADAM [49] optimizer with the default hyperparameter configuration for (β1, β2) as (0.9, 0.999). Columns: Tasks / Configurations; Model Hyper-parameters (k, Layers, d_model); Training Process (LR, Loss, Batch Size, Epochs). (An optimizer-configuration sketch appears after this table.)
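
To illustrate the Dataset Splits row, the sketch below takes a chronological (Train, Validation, Test) split using the sizes quoted from Table 6 for ETTm1/ETTm2. The raw series here is random dummy data of a hypothetical length, and the exact boundary handling (e.g., look-back overlap at split borders) depends on each benchmark's data loader, which the quote does not specify.

```python
import numpy as np

# Dummy multivariate series standing in for ETTm1/ETTm2 (Dim = 7).
# The raw length is hypothetical; only the split sizes below are
# taken from the quoted Table 6.
series = np.random.randn(57600, 7)

# Table 6 reports (Train, Validation, Test) = (34465, 11521, 11521)
# for ETTm1/ETTm2, taken chronologically from the series.
n_train, n_val, n_test = 34465, 11521, 11521
train = series[:n_train]
val = series[n_train:n_train + n_val]
test = series[n_train + n_val:n_train + n_val + n_test]

print(train.shape, val.shape, test.shape)
# -> (34465, 7) (11521, 7) (11521, 7)
```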
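
As a minimal sketch of the Experiment Setup row, the PyTorch snippet below configures ADAM with the quoted default (β1, β2) = (0.9, 0.999). The model, learning rate, loss, and batch size are placeholders: Table 7's per-task values (k, Layers, d_model, LR, Loss, Batch Size, Epochs) are not reproduced in the quote, so this shows only the stated optimizer configuration, not Peri-midFormer itself.

```python
import torch
import torch.nn as nn

# Placeholder model standing in for Peri-midFormer; the real architecture
# and its hyper-parameters (k, Layers, d_model) come from Table 7.
model = nn.Linear(96, 96)

# Quoted setup: ADAM with default (beta1, beta2) = (0.9, 0.999).
# The learning rate is a placeholder; Table 7 lists per-task LR values.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999))
criterion = nn.MSELoss()  # loss choice is task-dependent in Table 7

# One illustrative training step on random data (batch size 32 is arbitrary).
x = torch.randn(32, 96)
y = torch.randn(32, 96)
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```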