Coherent Probabilistic Aggregate Queries on Long-horizon Forecasts
Authors: Prathamesh Deshpande, Sunita Sarawagi
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show that our method improves forecast performance across both base-level and unseen aggregates post inference on real datasets spanning three diverse domains. (Project URL) |
| Researcher Affiliation | Academia | Prathamesh Deshpande , Sunita Sarawagi Department of Computer Science and Engineering, IIT Bombay pratham@cse.iitb.ac.in, sunita@iitb.ac.in |
| Pseudocode | Yes | Algorithm 1 KLST Training and Inference Algorithm |
| Open Source Code | No | The paper includes a Project URL but this typically points to a project page, not necessarily the source code repository. There is no explicit statement about releasing code for the described methodology. |
| Open Datasets | Yes | Electricity: This dataset contains national electricity load of the Panama power system. The dataset is collected at an hourly granularity from January 2015 to March 2020. Solar: This dataset is on hourly photo-voltaic production of 137 stations and was used in [Salinas et al., 2019]. ETT (Hourly and 15-minute): This dataset contains a time-series of oil temperature at an electrical transformer, collected from July 2016 to June 2017. The dataset is available at two granularities: 15 minutes (ETT) and one hour (ETTH). Both were used in [Zhou et al., 2020]. |
| Dataset Splits | Yes | First we split the series into training, validation, and test sets of lengths l_trn, l_val, and l_test respectively as follows: D_trn = {(x_t, y_t) \| t = 1, ..., l_trn}, D_val = {(x_t, y_t) \| t = l_trn - T + 1, ..., l_trn + l_val}, D_test = {(x_t, y_t) \| t = l_trn + l_val - T + 1, ..., n} |
| Hardware Specification | No | The paper describes the model architecture and training procedure but does not specify any hardware used for the experiments (e.g., GPU models, CPU types). |
| Software Dependencies | No | The paper mentions using 'Transformer-based NAR models' and 'Informer [Zhou et al., 2020]' as base models, and discusses implementations (e.g., 'our baseline Trans-NAR'). However, it does not provide specific version numbers for software libraries, frameworks (like PyTorch, TensorFlow), or programming languages used. |
| Experiment Setup | Yes | The choice of aggregate functions and the range of K values are based on the validation set and chosen from aggregate functions = {Sum, Slope} and K = {6, 12}. For all datasets, we assign weights α = {10, 0.5} to the Sum and Slope aggregates respectively. Since direct predictions on the Sum aggregate are often more accurate, we assign a higher weight to it. |
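The Sum and Slope aggregates mentioned in the experiment setup can be sketched as window statistics over the forecast horizon. The sketch below is an assumption-laden illustration, not the paper's released code: it assumes non-overlapping windows of length K and a least-squares definition of Slope, neither of which the excerpt above pins down.

```python
import numpy as np

def sum_aggregate(y, K):
    """Sum of each non-overlapping window of length K (assumed windowing)."""
    n = len(y) // K * K  # drop any trailing partial window
    return y[:n].reshape(-1, K).sum(axis=1)

def slope_aggregate(y, K):
    """Least-squares slope within each non-overlapping window of length K
    (assumed definition of the paper's Slope aggregate)."""
    n = len(y) // K * K
    windows = np.asarray(y[:n], dtype=float).reshape(-1, K)
    t = np.arange(K)
    t_c = t - t.mean()  # centered time index
    # OLS slope per window: cov(t, y) / var(t)
    return (windows - windows.mean(axis=1, keepdims=True)) @ t_c / (t_c @ t_c)

# Example with K = 6, one of the validation-chosen values:
y = np.arange(12.0)
print(sum_aggregate(y, 6))    # window sums
print(slope_aggregate(y, 6))  # per-window trend
```

A weighted consistency loss over these aggregates would then combine them with the excerpt's α = {10, 0.5}, but the exact loss form is not specified in the quoted text.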