Towards Transparent Time Series Forecasting
Authors: Krzysztof Kacprzyk, Tennison Liu, Mihaela van der Schaar
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments were conducted on four real-world datasets (Airfoil (Brooks et al., 1989), flchain (Dispenzieri et al., 2012), Stress-Strain (Aakash et al., 2019), and Tacrolimus (Woillard et al., 2011)) and three synthetic ones (Sine, Beta, and Tumor, the latter based on a model from (Wilkerson et al., 2017)). The synthetic datasets are constructed to contain trajectories exhibiting many different trends. Figure 1 and Figure 4 show TIMEVIEW fitted to the Sine, Beta, and Tumor datasets. As shown in Table 3, TIMEVIEW outperforms the transparent and closed-form expression baselines on most datasets and achieves performance comparable to the black-box models. |
| Researcher Affiliation | Academia | Krzysztof Kacprzyk, University of Cambridge, kk751@cam.ac.uk; Tennison Liu, University of Cambridge, tl522@cam.ac.uk; Mihaela van der Schaar, University of Cambridge and The Alan Turing Institute, mv472@cam.ac.uk |
| Pseudocode | Yes | The pseudocode for model training in TIMEVIEW is shown in Algorithm 1, and the pseudocode for the composition extraction implemented in TIMEVIEW is shown in Algorithm 2. |
| Open Source Code | Yes | The code to reproduce the results and for the visualization tool can be found at https://github.com/krzysztof-kacprzyk/TIMEVIEW and at the wider lab repository https://github.com/vanderschaarlab/TIMEVIEW. |
| Open Datasets | Yes | Experiments were conducted on four real-world datasets (Airfoil (Brooks et al., 1989), flchain (Dispenzieri et al., 2012), Stress-Strain (Aakash et al., 2019), and Tacrolimus (Woillard et al., 2011)) and three synthetic ones (Sine, Beta, and Tumor, the latter based on a model from (Wilkerson et al., 2017)). |
| Dataset Splits | Yes | All datasets are split into training, validation, and testing sets with ratios (0.7 : 0.15 : 0.15). (A hedged sketch of such a split is given below the table.) |
| Hardware Specification | Yes | The experiments were performed on a 12th Gen Intel Core i7-12700H with 64 GB of RAM and an NVIDIA GeForce RTX 3050 Ti Laptop GPU, as well as on a 10th Gen Intel Core i9-10980XE with 60 GB of RAM and an NVIDIA RTX A4000. |
| Software Dependencies | No | The paper mentions various software packages used (e.g., scipy, scikit-learn, pytorch, py-xgboost, catboost, lightgbm) and their licenses, but does not specify their exact version numbers required for reproduction. |
| Experiment Setup | Yes | We perform hyperparameter tuning using Optuna (Akiba et al., 2019) and run it for 100 trials. We describe the hyperparameters we tune and their ranges in Table 6. (See the Optuna sketch below the table.) |
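
The 0.7 : 0.15 : 0.15 split reported in the Dataset Splits row can be reproduced in spirit with the minimal sketch below. This is not the authors' code; it assumes scikit-learn's `train_test_split` and uses placeholder arrays `X` and `y`.

```python
# Minimal sketch (not the authors' code) of a 0.7 / 0.15 / 0.15 split
# using scikit-learn's train_test_split; X and y are hypothetical data.
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)
X = rng.randn(1000, 5)   # placeholder feature matrix
y = rng.randn(1000)      # placeholder targets

# First carve off 15% of the data as the test set.
X_trainval, X_test, y_trainval, y_test = train_test_split(
    X, y, test_size=0.15, random_state=0
)
# Then split the remaining 85% so that 15% of the full data becomes the
# validation set (0.15 / 0.85 ≈ 0.1765 of the remainder).
X_train, X_val, y_train, y_val = train_test_split(
    X_trainval, y_trainval, test_size=0.15 / 0.85, random_state=0
)

print(len(X_train), len(X_val), len(X_test))  # roughly 700 / 150 / 150
```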
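
The hyperparameter tuning described in the Experiment Setup row follows the standard Optuna pattern of optimizing an objective function over 100 trials. The sketch below is a generic illustration, not the authors' setup: the model (`RandomForestRegressor`) and the hyperparameter ranges are placeholders, while the actual search space is given in Table 6 of the paper.

```python
# Generic Optuna sketch (100 trials) with placeholder hyperparameters;
# the paper's actual hyperparameters and ranges are listed in its Table 6.
import optuna
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=5, noise=0.1, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.15, random_state=0
)

def objective(trial):
    # Hypothetical hyperparameters and ranges, for illustration only.
    n_estimators = trial.suggest_int("n_estimators", 50, 500)
    max_depth = trial.suggest_int("max_depth", 2, 16)
    model = RandomForestRegressor(
        n_estimators=n_estimators, max_depth=max_depth, random_state=0
    )
    model.fit(X_train, y_train)
    # Validation MSE is minimized by the study.
    return mean_squared_error(y_val, model.predict(X_val))

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=100)
print(study.best_params)
```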