Conformal Prediction for Time Series with Modern Hopfield Networks

Authors: Andreas Auer, Martin Gauch, Daniel Klotz, Sepp Hochreiter

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In experiments, we demonstrate that our new approach outperforms state-of-the-art conformal prediction methods on multiple real-world time series datasets from four different domains.
Researcher Affiliation | Collaboration | ELLIS Unit Linz and LIT AI Lab, Institute for Machine Learning, Johannes Kepler University Linz, Austria; Google Research, Zürich, Switzerland
Pseudocode | No | The paper provides equations and descriptive text for its methods but does not include a distinct block of pseudocode or a formally labeled algorithm.
Open Source Code | Yes | The code and data to reproduce all of our experiments are available at https://github.com/ml-jku/HopCPT.
Open Datasets | Yes | Datasets. We use datasets from four different domains: (a) Three solar radiation datasets from the US National Solar Radiation Database (Sengupta et al., 2018). (b) An air quality dataset from Beijing, China (Zhang et al., 2017). (c) Sap flow measurements from the Sapfluxnet data project (Poyatos et al., 2021). (d) Streamflow, a dataset of water flow measurements and corresponding meteorologic observations from 531 rivers across the continental United States (Newman et al., 2015; Addor et al., 2017).
Dataset Splits | Yes | We partition the split conformal calibration data into training and validation sets. As EnbPI requires only the k past points, we used the full calibration set minus these k points for validation, so that it could fully exploit the available data. (A hedged sketch of such an order-preserving split follows this table.)
Hardware Specification | Yes | Most of them were executed on a machine with an Nvidia P100 GPU and a Xeon E5-2698 CPU.
Software Dependencies | Yes | The random forest and LightGBM models are implemented with the darts library (Herzen et al., 2022), the ridge regression model with sklearn (Pedregosa et al., 2011). For the LSTM model, we instead train a global model on all time series of a dataset... The LSTM is implemented with PyTorch (Paszke et al., 2019). AdamW (Loshchilov & Hutter, 2019) with standard parameters (β1 = 0.9, β2 = 0.999, δ = 0.01) was used as optimizer. (A hedged optimizer sketch follows this table.)
Experiment Setup | Yes | Table 2: Parameters used in the hyperparameter search of the uncertainty models. HopCPT: Learning Rate 0.01, 0.001; Dropout 0, 0.25, 0.5; Time Encode yes/no; ... Learning Rate 0.005, 0.001, 0.0001; Dropout 0.1, 0.25, 0.5; Batch Size 512, 256; Hidden Size 64, 128, 256. (A hedged sketch of one such search grid follows this table.)
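
The Dataset Splits row quotes the partitioning of the split conformal calibration data into training and validation sets. A minimal sketch, assuming a single time series and a 20% validation fraction (both assumptions, not taken from the paper), of how such an order-preserving split could look:

```python
import numpy as np

def split_calibration(scores: np.ndarray, val_fraction: float = 0.2):
    """Split calibration scores into a training and a validation part,
    keeping the temporal order intact (no shuffling)."""
    n_val = max(1, int(len(scores) * val_fraction))
    return scores[:-n_val], scores[-n_val:]

# Placeholder calibration data; in the paper these would be quantities derived
# from a fitted prediction model on held-out time steps.
calib_scores = np.linspace(0.0, 1.0, num=1000)
train_scores, val_scores = split_calibration(calib_scores)
print(len(train_scores), len(val_scores))  # 800 200
```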
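The Software Dependencies row quotes AdamW with standard parameters (β1 = 0.9, β2 = 0.999, δ = 0.01). A minimal PyTorch sketch, assuming δ = 0.01 denotes the weight-decay coefficient (which matches PyTorch's AdamW default); the model dimensions and learning rate below are placeholders, not the paper's configuration:

```python
import torch
from torch import nn

# Placeholder LSTM; the paper trains a global LSTM over all series of a dataset,
# but the sizes here are illustrative only.
model = nn.LSTM(input_size=16, hidden_size=64, batch_first=True)

optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=1e-3,                  # placeholder; the paper searches over several rates
    betas=(0.9, 0.999),       # β1, β2 as quoted
    weight_decay=0.01,        # interpreting the quoted δ = 0.01 as weight decay
)
```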
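The Experiment Setup row quotes the HopCPT portion of the hyperparameter search from Table 2. A minimal sketch, assuming the grid is exhaustively enumerated (the excerpt does not specify the search procedure) and using key names that are assumptions, not identifiers from the authors' code:

```python
from itertools import product

# Search space for HopCPT as quoted from Table 2 of the paper.
hopcpt_grid = {
    "learning_rate": [0.01, 0.001],
    "dropout": [0.0, 0.25, 0.5],
    "time_encode": [True, False],
}

# Enumerate every combination of the grid (2 * 3 * 2 = 12 configurations).
configs = [dict(zip(hopcpt_grid, values)) for values in product(*hopcpt_grid.values())]
print(len(configs))   # 12
print(configs[0])     # {'learning_rate': 0.01, 'dropout': 0.0, 'time_encode': True}
```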