Learning To Scale Mixed-Integer Programs

Authors: Timo Berthold, Gregor Hendel (pp. 3661-3668)

AAAI 2021

Reproducibility assessment (variable — result — LLM response):
Research Type — Experimental. "We evaluate the predictive power of a random forest approach and a linear regressor that learns the (square-root of the) difference in attention level. It turns out that the resulting classification not only reduces various types of numerical errors by large margins, but it also improves the performance of the dual simplex algorithm. The learned model has been implemented within the FICO Xpress MIP solver and it is used by default since release 8.9, May 2020, to determine the scaling algorithm Xpress applies before solving an LP or a MIP."
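The linear-regressor side of this approach can be sketched as follows. This is a minimal illustration on synthetic data: the feature names (coefficient-, objective-, and right-hand-side-based statistics) and the scaling labels are assumptions for illustration, not the paper's exact pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for instance features (hypothetical names/semantics).
X = rng.normal(size=(200, 3))

# Synthetic target: (square root of the) difference in attention level
# between two candidate scaling algorithms; positive favors switching.
true_w = np.array([0.014, 0.07, 0.008])
y = X @ true_w + rng.normal(scale=0.005, size=200)

# Fit a least-squares linear regressor on the training features.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def choose_scaling(features, tau=-0.05):
    """Classify by thresholding the predicted attention-level difference."""
    return "alternative" if features @ w > tau else "default"
```

Thresholding the regressor's continuous output at τ is what turns it into the binary classifier that decides which scaling algorithm to apply.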
Researcher Affiliation — Industry. Timo Berthold and Gregor Hendel, FICO Xpress Optimization, Fair Isaac Germany GmbH, Takustr. 7, 14195 Berlin, Germany (timoberthold@fico.com, gregorhendel@fico.com).
Pseudocode — No. The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code — No. The paper states that the learned model has been implemented within the FICO Xpress MIP solver and is used by default, but it does not provide any link or explicit statement about public release of its source code.
Open Datasets — Yes. "This test bed comprises publicly available problems from the MIPLIB 2017 (Gleixner et al. 2019) benchmark set as well as confidential customer problems."
Dataset Splits — No. The paper states: "For the training of the regression models, we split our data set randomly into 60 % (1941) training and 40 % (1294) test set." It does not mention a separate validation split.
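A 60/40 random split like the one quoted above can be reproduced schematically. The instance count 3235 = 1941 + 1294 is taken from the quote; the split function and seed are a generic sketch, not the authors' code.

```python
import random

def split_dataset(instances, train_frac=0.6, seed=42):
    """Randomly split instances into train/test sets; no validation set,
    matching the paper's description. The seed is an arbitrary choice."""
    shuffled = list(instances)
    random.Random(seed).shuffle(shuffled)
    cut = round(train_frac * len(shuffled))
    return shuffled[:cut], shuffled[cut:]

# 1941 + 1294 = 3235 instances, as in the quoted split.
train, test = split_dataset(range(3235))
```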
Hardware Specification — No. The paper mentions solving instances with "up to 8 threads" and "up to 20 threads" for parallel processing, but it does not specify CPU or GPU models or other hardware details used for the experiments.
Software Dependencies — Yes. "The learned model has been implemented within the FICO Xpress MIP solver and it is used by default since release 8.9, May 2020."
Experiment Setup — Yes. "We train a random forest with 500 trees. After the model training, we pick the thresholds τ by searching the domain [-1, 1] in 0.01-steps to finish our classifier individually for the linear regression and random forest models on the training set. The values of τ that minimize the attention level on the training set are -0.05 for the linear model and 0.01 for the random forest model. Our final implementation uses the coefficients 0.014·f_coef + 0.07·f_obj + 0.008·f_rhs, τ = 0.054."
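The threshold search described above — sweeping τ over [-1, 1] in 0.01-steps and keeping the value that minimizes the attention level on the training set — can be sketched as follows. The data layout (per-instance predictions and a pair of attention levels per instance) is a hypothetical simplification.

```python
def pick_threshold(preds, attention, lo=-1.0, hi=1.0, step=0.01):
    """For each candidate tau, switch scaling when the regressor's
    prediction exceeds tau, and score the total attention level of the
    resulting choices; return the tau with the lowest score.

    attention[i] = (level with default scaling, level with alternative).
    """
    best_tau, best_score = lo, float("inf")
    steps = int(round((hi - lo) / step))
    for k in range(steps + 1):
        tau = lo + k * step
        score = sum(alt if p > tau else dflt
                    for p, (dflt, alt) in zip(preds, attention))
        if score < best_score:
            best_tau, best_score = tau, score
    return best_tau
```

Because the sweep keeps the first minimizer it encounters, ties over a range of equally good thresholds resolve to the smallest such τ.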