Sequential Latent Variable Models for Few-Shot High-Dimensional Time-Series Forecasting
Authors: Xiajun Jiang, Ryan Missel, Zhiyuan Li, Linwei Wang
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We compared the presented framework with a comprehensive set of baseline models 1) trained globally on the large meta-training set with diverse dynamics, 2) trained individually on single dynamics with and without fine-tuning to k-shot support series, and 3) extended to few-shot meta-formulations. We demonstrated that the presented framework is agnostic to the latent dynamic function of choice and, at meta-test time, is able to forecast for new dynamics given a variable number of support series. |
| Researcher Affiliation | Academia | Golisano College of Computing and Information Sciences Rochester Institute of Technology Rochester, NY 14623, USA {xj7056,rxm7244,zl7904,Linwei.Wang}@rit.edu |
| Pseudocode | No | The paper describes the methodology in prose and mathematical equations but does not include any explicit pseudocode blocks or algorithm figures. |
| Open Source Code | Yes | Source code available at https://github.com/john-x-jiang/meta_ssm. |
| Open Datasets | Yes | Data: We first considered benchmark images generated with controllable physics, including bouncing ball (Fraccaro et al., 2017), Hamiltonian pendulum (Botev et al., 2021), and Hamiltonian mass-spring systems (Botev et al., 2021). Details of data generation are available in Appendix G... All ball data can be found here: https://drive.google.com/drive/folders/1Tm3DNrugcSbWXSNyeGL3jQKR8y3iXx0m?usp=sharing. The heart data can be found here: https://drive.google.com/drive/folders/12S579V0KWMgbHGXDQZt0rQyfzF1AyNCu?usp=sharing. |
| Dataset Splits | Yes | For gravity-16 data, we used 10 gravity values in meta-training, 2 in meta-validation, and 4 in meta-testing. |
| Hardware Specification | Yes | All experiments were run on NVIDIA Tesla T4s with 16 GB memory. |
| Software Dependencies | No | The paper specifies the optimizer (Adam) and neural network architectures (e.g., GRU-res, NODE, RGN-res) but does not provide specific version numbers for software dependencies like Python, PyTorch, TensorFlow, or CUDA. |
| Experiment Setup | Yes | Detailed hyperparameter settings are shown below. Meta Model Architecture on Mixed-Physics and Gravity-16: Domain Input: 20 observation timesteps of 32×32 dimensions, Initialization Input: 3 observation timesteps of 32×32 dimensions, Optimizer: Adam, 5×10⁻⁴ learning rate, Batch size: 50, Number of epochs: 200, Latent Units: 8, Transition Units: 100, Domain Encoder Filters: [8, 16, 8], Domain Time Units: [10, 5, 1], Initial Encoder Filters: [8, 16, 8], Emission Filters: [32, 16, 8, 1], KL term initialization: λ1 = 10⁻², KL term set-embedding: λ2 = 10⁻². |
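The reported hyperparameters can be collected into a single configuration object for reference. This is a minimal illustrative sketch: the key names are our own and are not taken from the paper or the meta_ssm repository, but the values match those quoted in the table above.

```python
# Hypothetical config sketch of the reported hyperparameters for the
# meta model on Mixed-Physics and Gravity-16. Key names are illustrative;
# only the values come from the paper's stated setup.
config = {
    "domain_input_timesteps": 20,       # observation window for the domain encoder
    "init_input_timesteps": 3,          # observation window for initialization
    "frame_size": (32, 32),             # spatial dimensions of each observation
    "optimizer": "Adam",
    "learning_rate": 5e-4,
    "batch_size": 50,
    "num_epochs": 200,
    "latent_units": 8,
    "transition_units": 100,
    "domain_encoder_filters": [8, 16, 8],
    "domain_time_units": [10, 5, 1],
    "initial_encoder_filters": [8, 16, 8],
    "emission_filters": [32, 16, 8, 1],
    "kl_weight_initialization": 1e-2,   # λ1: KL term on initialization
    "kl_weight_set_embedding": 1e-2,    # λ2: KL term on set-embedding
}
```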