Time-LLM: Time Series Forecasting by Reprogramming Large Language Models
Authors: Ming Jin, Shiyu Wang, Lintao Ma, Zhixuan Chu, James Y. Zhang, Xiaoming Shi, Pin-Yu Chen, Yuxuan Liang, Yuan-Fang Li, Shirui Pan, Qingsong Wen
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our comprehensive evaluations demonstrate that TIME-LLM is a powerful time series learner that outperforms state-of-the-art, specialized forecasting models. Moreover, TIME-LLM excels in both few-shot and zero-shot learning scenarios. |
| Researcher Affiliation | Collaboration | ¹Monash University, ²Ant Group, ³IBM Research, ⁴Griffith University, ⁵Alibaba Group, ⁶The Hong Kong University of Science and Technology (Guangzhou) |
| Pseudocode | No | The paper describes the model structure in text and diagrams but does not provide a formal pseudocode or algorithm block (a hedged sketch of the described pipeline appears after this table). |
| Open Source Code | Yes | The code is made available at https://github.com/KimMeen/Time-LLM. |
| Open Datasets | Yes | We evaluate on ETTh1, ETTh2, ETTm1, ETTm2, Weather, Electricity (ECL), Traffic, and ILI, which have been extensively adopted for benchmarking long-term forecasting models (Wu et al., 2023). ... Dataset statistics are summarized in Tab. 8. |
| Dataset Splits | Yes | Dataset statistics are summarized in Tab. 8. The dimension indicates the number of time series (i.e., channels), and the dataset size is organized in (training, validation, testing). (An illustrative chronological-split sketch appears after this table.) |
| Hardware Specification | Yes | Our model implementation is on PyTorch (Paszke et al., 2019) with all experiments conducted on NVIDIA A100-80G GPUs. |
| Software Dependencies | No | The paper mentions 'PyTorch (Paszke et al., 2019)' but does not give a version number for PyTorch itself, nor does it list other software components with specific versions (a version-logging sketch appears after this table). |
| Experiment Setup | Yes | The configurations of our models, relative to varied tasks and datasets, are consolidated in Tab. 9. ...Table 9: An overview of the experimental configurations for TIME-LLM. LTF and STF denote long-term and short-term forecasting, respectively. |
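Since the paper provides no formal algorithm block, the following is a minimal PyTorch sketch of the pipeline as the paper describes it: patch the input series, reprogram the patches against learned text prototypes via cross-attention, pass the result through a frozen LLM backbone, and project to the forecast horizon. All class names, layer sizes, and the stand-in encoder below are assumptions rather than the authors' code, and the paper's prompt-as-prefix component is omitted for brevity.

```python
import torch
import torch.nn as nn

class TimeLLMSketch(nn.Module):
    """Hedged sketch of the TIME-LLM pipeline; names and sizes are illustrative."""

    def __init__(self, patch_len=16, stride=8, d_model=768,
                 n_prototypes=100, pred_len=96):
        super().__init__()
        self.patch_len, self.stride = patch_len, stride
        self.patch_embed = nn.Linear(patch_len, d_model)        # patch -> token embedding
        # Learned "text prototype" vectors that patches are reprogrammed against.
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, d_model))
        self.reprogram = nn.MultiheadAttention(d_model, num_heads=8, batch_first=True)
        # Stand-in for the frozen pretrained LLM backbone used in the paper.
        self.backbone = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True),
            num_layers=2)
        for p in self.backbone.parameters():
            p.requires_grad = False                             # backbone stays frozen
        self.head = nn.Linear(d_model, pred_len)                # output projection

    def forward(self, x):                                       # x: (batch, seq_len)
        patches = x.unfold(1, self.patch_len, self.stride)      # (batch, n_patches, patch_len)
        tokens = self.patch_embed(patches)                      # (batch, n_patches, d_model)
        proto = self.prototypes.expand(x.size(0), -1, -1)       # share prototypes across batch
        tokens, _ = self.reprogram(tokens, proto, proto)        # cross-attend to prototypes
        hidden = self.backbone(tokens)                          # frozen LLM stand-in
        return self.head(hidden[:, -1])                         # forecast: (batch, pred_len)

model = TimeLLMSketch()
print(model(torch.randn(4, 512)).shape)                         # torch.Size([4, 96])
```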
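The splits row quotes the paper's (training, validation, testing) organization without reproducing the counts from its Tab. 8. A chronological split in the style common to these benchmarks might look like the sketch below; the 70/10/20 ratios and the local file name are illustrative assumptions, not the paper's exact figures.

```python
import pandas as pd

# Chronological train/validation/test split in the style of these benchmarks.
# Ratios and file name are assumptions; the paper's exact counts per dataset
# are listed in its Tab. 8.
df = pd.read_csv("ETTh1.csv")                  # hypothetical local copy of the dataset
n = len(df)
train = df.iloc[: int(0.7 * n)]                # earliest 70% for training
val = df.iloc[int(0.7 * n): int(0.8 * n)]      # next 10% for validation
test = df.iloc[int(0.8 * n):]                  # most recent 20% for testing
print(len(train), len(val), len(test))
```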
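Because the paper names PyTorch without pinning a version, a reproducer would need to record the software stack themselves. A minimal way to log the relevant versions:

```python
import sys
import torch

# Record the exact software stack, since the paper pins no versions.
print(sys.version)                 # Python interpreter version
print(torch.__version__)           # PyTorch version
print(torch.version.cuda)          # CUDA toolkit PyTorch was built against
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g. the A100-80G reported in the paper
```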