Smart Predict-and-Optimize for Hard Combinatorial Optimization Problems
Authors: Jayanta Mandi, Emir Demirović, Peter J. Stuckey, Tias Guns
AAAI 2020, pp. 1603-1610
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We experiment with weighted knapsack problems as well as complex scheduling problems, and show for the first time that a predict-and-optimize approach can successfully be used on large-scale combinatorial optimization problems. |
| Researcher Affiliation | Academia | (1) Data Analytics Laboratory, Vrije Universiteit Brussel {jayanta.mandi,tias.guns}@vub.be; (2) University of Melbourne {emir.demirovic,pstuckey}@unimelb.edu.au |
| Pseudocode | Yes | Algorithm 1: Stochastic Batch gradient descent for the two-stage learning for regression tasks (batchsize: N) and learning rate α; Algorithm 2: Stochastic Batch gradient descent for the SPO approach for regression tasks (batchsize: N) and learning rate α (see the sketch after the table) |
| Open Source Code | Yes | The code of our experiments is available at https://github.com/JayMan91/aaai_predit_then_optimize.git |
| Open Datasets | Yes | Our data is drawn from the Irish Single Electricity Market Operator (SEMO) (Ifrim, O'Sullivan, and Simonis 2012). |
| Dataset Splits | Yes | For the experiments, we divide our data into three sets: training (70%), validation (10%) and test (20%), and evaluate the performance by measuring regret on the test set. |
| Hardware Specification | Yes | Experiments were run on Intel(R) Xeon(R) CPU E3-1225 v5 @ 3.30GHz processors with 32GB memory. |
| Software Dependencies | No | The paper mentions software like "Gurobi optimization-solver" and "Pytorch" but does not provide specific version numbers for these dependencies. |
| Experiment Setup | No | The paper describes how hyperparameters (learning rate and momentum) were selected via grid search and states the batching strategy ("each batch corresponds to one day"), but it does not provide specific numerical values for the chosen hyperparameters or other system-level training configurations such as optimizer settings or epochs in the main text. |
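
For context on the Pseudocode row, the following is a minimal, illustrative sketch of a stochastic batch training loop in the spirit of Algorithm 2 (the SPO approach). It is not the authors' code: the prediction `model`, the `solve_oracle` function (standing in for a combinatorial solver such as a Gurobi knapsack or scheduling model), the batch layout, and the SPO+ subgradient convention are assumptions made for illustration.

```python
import torch

def spo_plus_subgradient(c_hat, c_true, solve_oracle):
    """Subgradient of the SPO+ surrogate loss w.r.t. the predicted costs c_hat.

    solve_oracle(costs) is assumed to return an optimal decision vector for the
    combinatorial problem (e.g. a knapsack or scheduling MIP); it is treated as a
    black box, so no gradients flow through it. Sign and scaling conventions vary
    with min/max orientation; this follows the common minimization form and drops
    a constant factor of 2.
    """
    w_spo = solve_oracle(2.0 * c_hat.detach() - c_true)  # solve the perturbed problem
    w_true = solve_oracle(c_true)                         # solve with the true costs
    return torch.as_tensor(w_true - w_spo, dtype=c_hat.dtype)

def train_spo(model, optimizer, batches, solve_oracle, epochs=10):
    """Stochastic batch descent in the spirit of Algorithm 2.

    `batches` is a list of batches, each a list of (features, true_costs) pairs;
    per the paper's setup, each batch would correspond to one day of instances.
    """
    for _ in range(epochs):
        for batch in batches:
            optimizer.zero_grad()
            for x, c_true in batch:
                c_hat = model(x)  # predicted cost coefficients
                g = spo_plus_subgradient(c_hat, c_true, solve_oracle)
                # Push the SPO+ subgradient back through the prediction model.
                c_hat.backward(gradient=g)
            optimizer.step()
```

For comparison, Algorithm 1 (the two-stage baseline) would replace the subgradient step above with an ordinary regression loss (e.g. mean squared error between `c_hat` and `c_true`) and only solve the combinatorial problem afterwards.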