STanHop: Sparse Tandem Hopfield Model for Memory-Enhanced Time Series Prediction
Authors: Dennis Wu, Jerry Yao-Chieh Hu, Weijian Li, Bo-Yu Chen, Han Liu
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirically, we validate the efficacy of STanHop-Net on many settings: time series prediction, fast test-time adaptation, and strongly correlated time series prediction. |
| Researcher Affiliation | Academia | Department of Computer Science, Northwestern University, Evanston, IL 60208, USA; Department of Physics, National Taiwan University, Taipei 10617, Taiwan; Department of Statistics and Data Science, Northwestern University, Evanston, IL 60208, USA |
| Pseudocode | Yes | Algorithm 1 Multi-Step Generalized Sparse Hopfield Update for GSH (a minimal sketch of this update appears below the table) |
| Open Source Code | Yes | Code is available at GitHub; full version and future updates are on arXiv. |
| Open Datasets | Yes | Data. Following (Zhang and Yan, 2023; Zhou et al., 2022; Wu et al., 2021), we use 6 realistic datasets: ETTh1 (Electricity Transformer Temperature-hourly), ETTm1 (Electricity Transformer Temperature-minutely), WTH (Weather), ECL (Electricity Consuming Load), ILI (Influenza-Like Illness), Traffic. [...] Table 8: Dataset Sources |
| Dataset Splits | Yes | The first four datasets are split into train/val/test ratios of 14/5/5, and the last two are split into 7/1/2 (a split helper illustrating these ratios appears below the table). |
| Hardware Specification | No | This research was supported in part through the computational resources and staff contributions provided for the Quest high performance computing facility at Northwestern University which is jointly supported by the Office of the Provost, the Office for Research, and Northwestern University Information Technology. |
| Software Dependencies | No | The paper mentions software like 'Adam optimizer' and 'Weights and Biases (Biewald et al., 2020)' but does not provide specific version numbers for these or other key software dependencies. |
| Experiment Setup | Yes | For each dataset, we conduct hyperparameter optimization using the Sweep feature of Weights and Biases (Biewald et al., 2020), with 200 iterations of random search per setting to identify the optimal model configuration. The search space for all hyperparameters is reported in Table 9. (An illustrative sweep configuration appears below the table.) |
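For orientation, the update that Algorithm 1 refers to can be sketched in a few lines of NumPy. This is a minimal sketch, not the paper's implementation: GSH uses a learnable generalized entmax, whereas the sketch below fixes the sparsemax of Martins and Astudillo (2016), and the names `sparsemax` and `gsh_update` are ours.

```python
import numpy as np

def sparsemax(z):
    # Euclidean projection of z onto the probability simplex
    # (Martins & Astudillo, 2016); yields exactly sparse weights.
    z_sorted = np.sort(z)[::-1]
    cssv = np.cumsum(z_sorted)
    k = np.arange(1, len(z) + 1)
    k_z = k[1 + k * z_sorted > cssv][-1]   # support size
    tau = (cssv[k_z - 1] - 1.0) / k_z      # threshold
    return np.maximum(z - tau, 0.0)

def gsh_update(query, memory, beta=1.0, steps=3):
    # Multi-step retrieval: repeatedly project the query onto a sparse
    # convex combination of the stored patterns (columns of `memory`).
    for _ in range(steps):
        query = memory @ sparsemax(beta * (memory.T @ query))
    return query

# Toy retrieval: a noisy probe converges toward the nearest stored pattern.
memory = np.random.randn(8, 5)                    # 5 patterns of dimension 8
probe = memory[:, 0] + 0.1 * np.random.randn(8)   # corrupted copy of pattern 0
retrieved = gsh_update(probe, memory, beta=4.0)
```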
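The split ratios in the Dataset Splits row translate into index ranges only under a convention; below is one plausible chronological reading, with lookback-window overlap at the split boundaries (which the paper's code may handle differently) left out. The helper name is ours.

```python
import numpy as np

def chronological_split(n_rows, ratios):
    # Chronological train/val/test split by ratio, e.g. (14, 5, 5) for
    # ETTh1/ETTm1/WTH/ECL or (7, 1, 2) for ILI/Traffic.
    total = sum(ratios)
    n_train = n_rows * ratios[0] // total
    n_val = n_rows * ratios[1] // total
    return (np.arange(0, n_train),
            np.arange(n_train, n_train + n_val),
            np.arange(n_train + n_val, n_rows))

train_idx, val_idx, test_idx = chronological_split(10000, (14, 5, 5))  # arbitrary length for illustration
```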
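The Weights and Biases Sweep setup in the Experiment Setup row maps onto the standard `wandb.sweep` / `wandb.agent` API. A minimal sketch follows; the metric name, project name, and parameter ranges are illustrative placeholders, since the actual search space lives in the paper's Table 9.

```python
import wandb

def train():
    # Placeholder objective: one run per sampled configuration.
    run = wandb.init()
    # ... build STanHop-Net from run.config, train, evaluate ...
    wandb.log({"val_mse": 0.0})  # log the real validation metric here

sweep_config = {
    "method": "random",  # random search, as in the paper's setup
    "metric": {"name": "val_mse", "goal": "minimize"},
    "parameters": {      # illustrative only; see the paper's Table 9
        "learning_rate": {"distribution": "log_uniform_values",
                          "min": 1e-5, "max": 1e-3},
        "hidden_dim": {"values": [64, 128, 256]},
    },
}

sweep_id = wandb.sweep(sweep_config, project="stanhop-repro")
wandb.agent(sweep_id, function=train, count=200)  # 200 iterations per setting
```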