Graph-based Time Series Clustering for End-to-End Hierarchical Forecasting

Authors: Andrea Cini, Danilo Mandic, Cesare Alippi

ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | HiGP is extensively validated on relevant benchmarks (Sec. 5). Besides achieving state-of-the-art forecasting accuracy, we show that our approach can be used as a self-supervised architecture to learn meaningful cluster assignments. Section 5, Experiments: HiGP is validated over several settings considering forecasting benchmarks with no predefined hierarchical structure. In particular, we focus on validating the proposed end-to-end clustering and forecasting architecture against relevant baselines and state-of-the-art architectures.
Researcher Affiliation | Academia | (1) The Swiss AI Lab IDSIA, Università della Svizzera italiana, Switzerland; (2) Imperial College London, United Kingdom; (3) Politecnico di Milano, Italy.
Pseudocode | No | The paper describes the model architecture and equations (e.g., Eq. 1-7) but does not provide pseudocode or a clearly labeled algorithm block.
Open Source Code | Yes | The code for reproducing the computational experiments is available online. Footnote 2: https://github.com/andreacini/higp
Open Datasets | Yes | All of the above datasets are either openly available (Metr-LA, PeMS-Bay, AQI) or obtainable free of charge for research purposes (CER-E). Footnote 3: https://www.ucd.ie/issda/data/commissionforenergyregulationcer/
Dataset Splits | Yes | Training, validation, and testing data are respectively obtained with a 70%/10%/20% sequential split. (A minimal split sketch is given after the table.)
Hardware Specification | Yes | Experiments were run on a server equipped with AMD EPYC 7513 CPUs and NVIDIA RTX A5000 GPUs.
Software Dependencies | No | The paper lists software libraries such as Python (Van Rossum and Drake, 2009), PyTorch (Paszke et al., 2019), PyTorch Lightning (Falcon and The PyTorch Lightning team, 2019), PyTorch Geometric (Fey and Lenssen, 2019), and Torch Spatiotemporal (Cini and Marisca, 2022). While the citations include publication years, specific version numbers (e.g., Python 3.8, PyTorch 1.9) are not given, and these are needed for reproducibility.
Experiment Setup | Yes | We trained each model with early stopping on the validation set and a batch size of 64 samples, for a maximum of 200 epochs of at most 300 batches each. We used the Adam optimizer with an initial learning rate of 0.003, reduced by a factor γ = 0.25 every 50 epochs. The number of neurons d_h in the layers of each model was set to 64 or 32 based on the validation error on each dataset. For HiGP, the regularization coefficient λ was tuned and set to 0.25 based on the validation error on the Metr-LA dataset and simply rescaled for the other datasets to account for the different magnitude of the input. (A hedged training-configuration sketch follows the table.)
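
To make the split protocol in the Dataset Splits row concrete, here is a minimal sketch of a 70%/10%/20% sequential split. The function name and the use of a plain NumPy array are illustrative assumptions; the released HiGP code likely handles windowing and preprocessing differently (e.g., via Torch Spatiotemporal).

```python
import numpy as np

def sequential_split(series, train_frac=0.7, val_frac=0.1):
    """Sequentially split a time series into train/validation/test segments.

    Illustrative sketch only: the released HiGP code may window and
    preprocess the data differently.
    """
    n = len(series)
    train_end = int(n * train_frac)
    val_end = int(n * (train_frac + val_frac))
    return series[:train_end], series[train_end:val_end], series[val_end:]

# Example: 1000 time steps -> 700 train / 100 validation / 200 test
train, val, test = sequential_split(np.arange(1000))
print(len(train), len(val), len(test))  # 700 100 200
```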
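Similarly, the Experiment Setup row translates into the following hedged PyTorch sketch. The placeholder model, the training-loop skeleton, and the early-stopping note are assumptions made for illustration; only the reported hyperparameters (Adam with learning rate 0.003, step decay γ = 0.25 every 50 epochs, batch size 64, at most 200 epochs of 300 batches, hidden size d_h of 32 or 64) are taken from the quoted setup.

```python
import torch
from torch import nn

# Placeholder model standing in for the HiGP architecture (hypothetical).
# The hidden size corresponds to d_h, set to 64 or 32 per dataset.
model = nn.GRU(input_size=1, hidden_size=64, batch_first=True)

optimizer = torch.optim.Adam(model.parameters(), lr=0.003)
# Learning rate reduced by a factor gamma = 0.25 every 50 epochs.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.25)

max_epochs = 200            # upper bound on training epochs
max_batches_per_epoch = 300
batch_size = 64

for epoch in range(max_epochs):
    # In the actual experiments, each epoch iterates over at most
    # `max_batches_per_epoch` mini-batches of `batch_size` samples,
    # minimizing the forecasting loss plus the lambda-weighted
    # regularization term for HiGP, with early stopping on the
    # validation error (patience not reproduced here).
    scheduler.step()
```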