Feature Programming for Multivariate Time Series Prediction
Authors: Alex Daniel Reneau, Jerry Yao-Chieh Hu, Ammar Gilani, Han Liu
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Numerically, we validate the efficacy of our method on several synthetic and real-world noisy time series datasets. |
| Researcher Affiliation | Academia | Department of Computer Science, Northwestern University, Evanston, USA; Department of Statistics and Data Science, Northwestern University, Evanston, USA. |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at GitHub; the most updated version is available on arXiv with a full list of authors, including Chenwei Xu and Weijian Li. We kindly request that citations refer to the arXiv version: https://arxiv.org/abs/2306.06252. |
| Open Datasets | Yes | The data utilized in our experiments consists of a synthetic dataset constructed to adhere to the assumptions of our method, as well as an electricity dataset, a traffic dataset, and a taxi dataset. ... Taxi Dataset: We use the TLC Trip Record Dataset... Electricity Dataset: We use the UCI Electricity Load Diagrams Dataset... Traffic Dataset: We use the UCI PEMS-SF Traffic Dataset... |
| Dataset Splits | No | Each of these datasets is partitioned in an 80/20 ratio to derive our training data (known as in-sample data) and testing data (referred to as out-of-sample data). (A minimal split sketch is given after the table.) |
| Hardware Specification | Yes | Platforms: The GPUs and CPUs used to conduct experiments are NVIDIA GeForce RTX 2080 Ti and Intel Xeon Silver 4214 @ 2.20GHz. |
| Software Dependencies | No | The paper mentions software such as 'DART', 'XGBoost', 'LightGBM', 'Transformer', 'TFT', 'TCN', and 'N-BEATS' but does not specify their version numbers. |
| Experiment Setup | Yes | Hyperparameter optimization is conducted via random search for 100 iterations. Search space: learning rate: 0.01, 0.001, 0.0001, 0.00001; batch size: 64, 128, 256, 512; feature dim hidden size: 64, 128, 512, 1024, 2048; num epochs: we use early stopping. ... We use an Adam optimizer with learning rate lr = 1e-5 for training. The coefficients of the Adam optimizer, betas, are set to (0.9, 0.999). (A short configuration sketch is given after the table.) |
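
The 80/20 in-sample/out-of-sample partition quoted above is, presumably, a chronological split along the time axis rather than a random shuffle (an assumption; the report does not state it explicitly). A minimal sketch, assuming NumPy arrays in a (time, variables) layout; the function name `chronological_split` is an illustration, not taken from the paper or its code:

```python
import numpy as np

def chronological_split(series: np.ndarray, train_frac: float = 0.8):
    """Split a multivariate series of shape (T, D) along the time axis,
    keeping temporal order: the first 80% is in-sample (training) data,
    the remainder is out-of-sample (testing) data."""
    cut = int(len(series) * train_frac)
    return series[:cut], series[cut:]

# Example: 1,000 time steps, 3 variables -> 800 in-sample, 200 out-of-sample
data = np.random.randn(1000, 3)
insample, out_of_sample = chronological_split(data)
```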
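
The hyperparameter search and optimizer settings reported in the Experiment Setup row translate into a short configuration snippet. A minimal PyTorch sketch, assuming the search-space values quoted above; the model is a placeholder and the helper name `sample_config` is hypothetical:

```python
import random
import torch

# Search space as quoted in the experiment setup above
search_space = {
    "learning_rate": [0.01, 0.001, 0.0001, 0.00001],
    "batch_size": [64, 128, 256, 512],
    "feature_dim_hidden_size": [64, 128, 512, 1024, 2048],
}

def sample_config(space):
    """Draw one random configuration from the search space."""
    return {name: random.choice(choices) for name, choices in space.items()}

# Random search for 100 iterations, as described in the setup
configs = [sample_config(search_space) for _ in range(100)]

# Adam optimizer as reported: lr = 1e-5, betas = (0.9, 0.999)
model = torch.nn.Linear(16, 1)  # placeholder model for illustration only
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5, betas=(0.9, 0.999))
```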