Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
FSTLLM: Spatio-Temporal LLM for Few Shot Time Series Forecasting
Authors: Yue Jiang, Yile Chen, Xiucheng Li, Qin Chao, Shuai Liu, Gao Cong
ICML 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on real-world datasets demonstrate the adaptability and consistently superior performance of FSTLLM over major baseline models by a significant margin. |
| Researcher Affiliation | Collaboration | College of Computing and Data Science, Nanyang Technological University, Singapore; DAMO Academy, Alibaba Group, Singapore; School of Computer Science and Technology, Harbin Institute of Technology (Shenzhen), China. |
| Pseudocode | No | The paper describes the methodology using mathematical equations and textual descriptions in sections 3.1, 3.2, and 3.3, but it does not include a clearly labeled pseudocode or algorithm block. |
| Open Source Code | Yes | Our code is available at: https://github.com/JIANGYUE61610306/FSTLLM. |
| Open Datasets | Yes | The Nottingham dataset contains parking lot availability data from 19 car parks in Nottingham, recorded at 15-minute intervals. We crawled this dataset from the official Tram Link Nottingham website (https://www.thetram.net/park-and-ride). The ECL dataset is a subset of the Electricity dataset (Li et al., 2019), comprising hourly electricity consumption (measured in kilowatt-hours) from 19 clients. |
| Dataset Splits | Yes | Both datasets are partitioned into training, validation, and testing sets with a split ratio of 70%/10%/20%. |
| Hardware Specification | Yes | FSTLLM is fine-tuned on the training examples for 2 epochs on a Linux workstation with an Intel(R) Core(TM) i7-13700K CPU @ 5.40GHz and an NVIDIA A6000 GPU. |
| Software Dependencies | No | The paper mentions using 4-bit quantization, LoRA, Adam optimizer, and LLaMA-2-7B model. However, it does not provide specific version numbers for software dependencies such as programming languages, libraries, or frameworks (e.g., Python 3.x, PyTorch 1.x). |
| Experiment Setup | Yes | We set LoRA attention dimension to be 64 and an initial learning rate to be 2e-4 with Adam optimizer. We set α to be 2.0 in the α-Entmax function and the depth of the graph diffusion convolution S is set to 3. The hidden size of GRUs is set to 64. ... We set the sequence length and prediction horizon to 12 for the ECL dataset... For the Nottingham dataset, ... we set the sequence length and prediction horizon to 8 (equivalent to 120 minutes)... |
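The paper reports a 70%/10%/20% train/validation/test partition for both datasets. For time series data such a split is typically chronological rather than shuffled; the sketch below illustrates that convention (the function name and the exact implementation are illustrative, not taken from the paper's code):

```python
import numpy as np

def chronological_split(series, train_frac=0.7, val_frac=0.1):
    """Partition a time series into contiguous train/val/test segments,
    preserving temporal order (no shuffling)."""
    n = len(series)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    train = series[:n_train]
    val = series[n_train:n_train + n_val]
    test = series[n_train + n_val:]
    return train, val, test

# Example: 1000 hourly readings -> 700 / 100 / 200 points
data = np.arange(1000)
train, val, test = chronological_split(data)
print(len(train), len(val), len(test))  # 700 100 200
```

Keeping the segments contiguous avoids leaking future observations into the training set, which matters for forecasting evaluation.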
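The setup fixes both the input sequence length and the prediction horizon (12 steps for ECL, 8 steps for Nottingham). A standard way to turn a raw series into training examples under that setting is a sliding window; this is a minimal sketch of that construction, with details assumed rather than taken from the paper's code:

```python
import numpy as np

def make_windows(series, seq_len=12, horizon=12):
    """Build (input, target) pairs with a sliding window:
    each input is `seq_len` consecutive steps, each target
    the following `horizon` steps (ECL-style: 12 and 12)."""
    X, Y = [], []
    for i in range(len(series) - seq_len - horizon + 1):
        X.append(series[i:i + seq_len])
        Y.append(series[i + seq_len:i + seq_len + horizon])
    return np.array(X), np.array(Y)

# 48 hourly readings yield 48 - 12 - 12 + 1 = 25 training pairs
series = np.arange(48)
X, Y = make_windows(series)
print(X.shape, Y.shape)  # (25, 12) (25, 12)
```

For the Nottingham dataset the same construction would use `seq_len=8, horizon=8`, matching the reported 120-minute window at 15-minute intervals.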