Predict+Optimise with Ranking Objectives: Exhaustively Learning Linear Functions

Authors: Emir Demirović, Peter J. Stuckey, James Bailey, Jeffrey Chan, Christopher Leckie, Kotagiri Ramamohanarao, Tias Guns

IJCAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We illustrate the applicability of our framework for the particular case of the unit-weighted knapsack predict+optimise problem and evaluate on benchmarks from the literature. We address the first question by advancing the theoretical foundations of predict+optimise by characterising the properties and computational complexity in relation to the machine learning algorithm and optimisation problem. To the best of our knowledge, our framework is the first of its kind for the predict+optimise setting. We provide two sets of experiments. The first demonstrates the computational benefits of techniques from Section 4.3. The second compares our approach with the state-of-the-art."
Researcher Affiliation | Academia | "¹University of Melbourne, Australia; ²Monash University, Australia; ³Data61, Australia; ⁴RMIT University, Australia; ⁵Vrije Universiteit Brussel, Belgium"
Pseudocode | No | The paper describes algorithms conceptually but does not present any formal pseudocode or algorithm blocks.
Open Source Code | No | The paper does not state that its source code is available and provides no link to a code repository.
Open Datasets | Yes | "We experiment with artificial and real-life energy-price datasets as used in [Demirović et al., 2019] with the unit-weighted knapsack predict+optimise problem [Gilmore and Gomory, 1966], which corresponds to the project-funding problem introduced in the examples. The real-life datasets contain two years of historical energy price data from the day-ahead market of SEM-O, the Irish Single Electricity Market Operator. The data was used in the ICON energy-aware scheduling competition and a number of publications, e.g. [Grimes et al., 2014; Dooren et al., 2017]." (this knapsack setting is sketched after the table)
Dataset Splits | Yes | "Training and test sets are divided at a 70%-30% ratio."
Hardware Specification | No | The paper does not specify any hardware used for running the experiments, such as CPU or GPU models.
Software Dependencies | No | The paper does not list specific software dependencies with version numbers.
Experiment Setup | Yes | "The capacity was set to 10% of total number of optimisation parameters, i.e. b = 4 in Example 2. Initial coefficients were based on SVM-s. For other methods, we performed 5-fold hyperparameter tuning with regret as the measure." (the split and tuning loop are sketched after the table)
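
The unit-weighted knapsack setting quoted in the Open Datasets row is simple enough to illustrate concretely. Below is a minimal sketch, not the authors' code: with unit weights and capacity b, the optimal solution is just the b highest-valued items, and the regret of a prediction is the true value lost by selecting items under predicted rather than true values. The 10% capacity rule follows the Experiment Setup row; the function names and synthetic data are our own.

```python
import numpy as np

def top_b(values, b):
    """Unit-weighted knapsack: the optimum is the b highest-valued items."""
    return np.argsort(values)[-b:]

def regret(true_values, predicted_values, b):
    """True-optimal value minus the true value of the selection
    made under the predicted values (always >= 0)."""
    best = true_values[top_b(true_values, b)].sum()
    achieved = true_values[top_b(predicted_values, b)].sum()
    return best - achieved

# One synthetic instance with 40 items; capacity is 10% of the items,
# mirroring the "b = 4 in Example 2" remark in the quoted setup.
rng = np.random.default_rng(0)
true_values = rng.uniform(0.0, 100.0, size=40)
predicted_values = true_values + rng.normal(scale=20.0, size=40)  # noisy predictions
b = max(1, len(true_values) // 10)

print(f"b = {b}, regret = {regret(true_values, predicted_values, b):.2f}")
```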
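
The Dataset Splits and Experiment Setup rows together describe a 70%/30% train/test split and 5-fold hyperparameter tuning scored by regret. The sketch below shows one way such a loop could look; the Ridge learner, the alpha grid, and the simplification of scoring each validation fold as a single knapsack instance are our assumptions, not the paper's method.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, train_test_split

def top_b_regret(y_true, y_pred, b):
    """Regret of picking the b items with the highest predicted values."""
    best = np.sort(y_true)[-b:].sum()
    achieved = y_true[np.argsort(y_pred)[-b:]].sum()
    return best - achieved

def tune_by_regret(X, y, alphas, n_splits=5, seed=0):
    """5-fold tuning: return the alpha with the lowest mean validation regret.
    Simplification: each validation fold is scored as one knapsack instance."""
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    mean_regret = {}
    for alpha in alphas:
        scores = []
        for tr, va in kf.split(X):
            model = Ridge(alpha=alpha).fit(X[tr], y[tr])
            b = max(1, len(va) // 10)  # capacity: 10% of the items in the fold
            scores.append(top_b_regret(y[va], model.predict(X[va]), b))
        mean_regret[alpha] = float(np.mean(scores))
    return min(mean_regret, key=mean_regret.get)

# Synthetic item features/values standing in for the energy-price data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ rng.normal(size=5) + rng.normal(scale=0.5, size=200)

# 70%/30% train/test split, as stated in the Dataset Splits row.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

alpha = tune_by_regret(X_tr, y_tr, alphas=[0.1, 1.0, 10.0])
model = Ridge(alpha=alpha).fit(X_tr, y_tr)
b_test = max(1, len(y_te) // 10)
print(f"alpha = {alpha}, test regret = {top_b_regret(y_te, model.predict(X_te), b_test):.3f}")
```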