Physics-Informed Long-Sequence Forecasting From Multi-Resolution Spatiotemporal Data
Authors: Chuizheng Meng, Hao Niu, Guillaume Habault, Roberto Legaspi, Shinya Wada, Chihiro Ono, Yan Liu
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Experimental results demonstrate that our proposed approach achieves the best performance on the long-sequence forecasting tasks compared to baselines without a specific design for multi-resolution data." Also, from Section 4 (Experiments): "Datasets: We evaluate the performance of ST-KMRN and all baselines on 3 datasets." |
| Researcher Affiliation | Collaboration | 1University of Southern California 2KDDI Research, Inc. |
| Pseudocode | No | The paper describes the methodology using prose and diagrams (Figure 2) but does not include any explicit pseudocode or algorithm blocks. |
| Open Source Code | No | "We include a more comprehensive summary of related works in the appendix" (https://github.com/mengcz13/mengcz13.github.io/raw/master/pdf/ijcai2022-appendix.pdf). This link points to an appendix PDF, not to the source code for the methodology described in the paper. |
| Open Datasets | Yes | Datasets We evaluate the performance of ST-KMRN and all baselines on 3 datasets: (1) New York Yellow Taxi Trip Record Data (Yellow Cab) [NYCTLC, 2021] in 2017-2019; (2) New York Green Taxi Trip Record Data (Green Cab) [NYCTLC, 2021] in 2017-2019; and (3) Solar Energy Data (Solar Energy) [NREL, 2021] of Alabama in 2006. |
| Dataset Splits | Yes | We use sliding windows to generate input/output sequence pairs ordered by starting time and divide all pairs into train/validation/test sets with the ratio 60%/20%/20%. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models used for running the experiments. It only mentions 'high computation and memory complexity' in relation to some baselines. |
| Software Dependencies | No | The paper does not specify software dependencies with version numbers, such as programming languages, libraries, or frameworks used for implementation. |
| Experiment Setup | No | The paper describes the datasets, baselines, and evaluation setup (e.g., sliding windows, train/validation/test splits) but does not provide specific details on hyperparameters such as learning rate, batch size, or optimizer settings for training the models. |
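The sliding-window split quoted in the Dataset Splits row can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function name `make_split` and the window lengths are assumptions, and the paper does not state its exact input/output window sizes; only the ordering by start time and the 60%/20%/20% ratio come from the quoted text.

```python
import numpy as np

def make_split(series, in_len, out_len, ratios=(0.6, 0.2, 0.2)):
    """Generate sliding-window input/output pairs ordered by start time,
    then split them into train/validation/test by the given ratios."""
    pairs = []
    for start in range(len(series) - in_len - out_len + 1):
        x = series[start : start + in_len]                      # input window
        y = series[start + in_len : start + in_len + out_len]   # forecast target
        pairs.append((x, y))
    n = len(pairs)
    n_train = int(ratios[0] * n)
    n_val = int(ratios[1] * n)
    # Pairs stay in chronological order, so the test set is the latest data.
    return (pairs[:n_train],
            pairs[n_train : n_train + n_val],
            pairs[n_train + n_val :])

# Toy usage: 100 time steps, 12-step input, 6-step output (hypothetical sizes).
series = np.arange(100)
train, val, test = make_split(series, in_len=12, out_len=6)
```

Splitting chronologically ordered pairs (rather than shuffling) keeps the test windows strictly later than the training windows, which avoids temporal leakage in forecasting evaluation.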