Dynamic Programming for Predict+Optimise
Authors: Emir Demirović, Peter J. Stuckey, Tias Guns, James Bailey, Christopher Leckie, Kotagiri Ramamohanarao, Jeffrey Chan (pp. 1444-1451)
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The aims of this section are: (a) to illustrate the applicability of our approach by comparing with state-of-the-art for the predict+optimise knapsack; (b) to show the scalability of our approach on two dynamic programming problems: knapsack and shortest path for directed-acyclic graphs; and (c) to demonstrate the strength of our approach by constructing benchmarks where the linear relaxation is not an accurate indicator for the quality of the predictions. |
| Researcher Affiliation | Academia | ¹University of Melbourne, Australia; ²Monash University, Australia; ³Data61, Australia; ⁴RMIT University, Australia; ⁵Vrije Universiteit Brussel, Belgium |
| Pseudocode | Yes | Algorithm 1: Compute the max of two piecewise linear functions |
| Open Source Code | No | The paper does not provide an explicit statement or link for open-source code for the described methodology. |
| Open Datasets | Yes | The predict+optimise knapsack dataset is based on two years of real-life energy price data. The data was used in the ICON energy-aware scheduling competition and a number of publications (Dooren et al. 2017; Grimes et al. 2014). |
| Dataset Splits | Yes | Training and test sets are divided at a 70%-30% ratio. For other methods, we performed 5-fold hyperparameter tuning with regret as the measure. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory, or cloud instance types) used for running experiments. |
| Software Dependencies | No | The paper does not specify software dependencies with version numbers. |
| Experiment Setup | Yes | Our coordinate descent approach optimises one parameter at a time. For other methods, we performed 5-fold hyperparameter tuning with regret as the measure. |
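The pseudocode row above names the paper's Algorithm 1, computing the pointwise maximum of two piecewise linear functions. Below is a minimal, hedged sketch of one way to do this, assuming each function is given as sorted `(x, y)` breakpoints with linear interpolation between them; the function names `evaluate` and `pwl_max` are illustrative, not the authors' implementation.

```python
from bisect import bisect_right

def evaluate(f, x):
    """Evaluate a piecewise linear function given as sorted (x, y) breakpoints."""
    xs = [p[0] for p in f]
    i = bisect_right(xs, x) - 1
    if i < 0:
        return f[0][1]          # clamp left of the domain
    if i >= len(f) - 1:
        return f[-1][1]         # clamp right of the domain
    (x0, y0), (x1, y1) = f[i], f[i + 1]
    t = (x - x0) / (x1 - x0)    # linear interpolation on the segment
    return y0 + t * (y1 - y0)

def pwl_max(f, g, eps=1e-12):
    """Pointwise max of two piecewise linear functions on a common domain.

    Candidate breakpoints are the union of both functions' breakpoints,
    plus any crossing point strictly inside an interval between them.
    """
    xs = sorted({p[0] for p in f} | {p[0] for p in g})
    out_xs = []
    for x0, x1 in zip(xs, xs[1:]):
        out_xs.append(x0)
        d0 = evaluate(f, x0) - evaluate(g, x0)
        d1 = evaluate(f, x1) - evaluate(g, x1)
        if d0 * d1 < -eps:      # sign change => the two segments cross
            t = d0 / (d0 - d1)
            out_xs.append(x0 + t * (x1 - x0))
    out_xs.append(xs[-1])
    return [(x, max(evaluate(f, x), evaluate(g, x))) for x in out_xs]
```

For example, the max of the increasing line through `(0, 0), (2, 2)` and the decreasing line through `(0, 2), (2, 0)` is V-shaped with a new breakpoint at their crossing `(1, 1)`.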
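For context on the benchmarks named in the Research Type row, the 0/1 knapsack used in the predict+optimise experiments is solvable by the classic dynamic program over capacities. This is the standard textbook recurrence, not the paper's code:

```python
def knapsack(values, weights, capacity):
    """0/1 knapsack via the classic DP over capacities.

    dp[c] holds the best total value achievable within capacity c,
    considering the items processed so far.
    """
    dp = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # Iterate capacities downward so each item is used at most once.
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]
```

With predicted item values, the same recurrence is what the paper's approach differentiates through indirectly, since the optimal DP value becomes a piecewise function of the prediction parameters.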