Interior Point Solving for LP-based prediction+optimisation
Authors: Jayanta Mandi, Tias Guns
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, our empirical experiments demonstrate our approach performs as well as, if not better than, the state-of-the-art QPTL (Quadratic Programming task loss) formulation of Wilder et al. [29] and the SPO approach of Elmachtoub and Grigas [12]. |
| Researcher Affiliation | Academia | Jayanta Mandi Data Analytics Laboratory Vrije Universiteit Brussel jayanta.mandi@vub.be |
| Pseudocode | Yes | Algorithm 1: End-to-end training of an LP (relaxed MILP) problem |
| Open Source Code | Yes | The implementation is available at https://github.com/JayMan91/NeurIPSIntopt |
| Open Datasets | Yes | We use the dataset of Rafiei and Adeli [26] |
| Dataset Splits | Yes | Out of the available 789 days, 552 are used for training, 60 for validation and 177 for testing. |
| Hardware Specification | Yes | All experiments were executed on a laptop with an 8-core Intel Core i7-8550U CPU @ 1.80GHz and 16 GB of RAM. |
| Software Dependencies | Yes | The neural network and the MILP model have been implemented using PyTorch 1.5.0 [23] and Gurobipy 9.0 [14], respectively. The homogeneous algorithm implementation is based on the one in the SciPy 1.4.1 Optimize module. |
| Experiment Setup | Yes | We treat the learning rate, epochs and weight decay as hyperparameters, selected by an initial random search followed by grid search on the validation set. For the proposed IntOpt approach, the values of the damping factor and the λ cut-off are chosen by grid search. |