Learning Hard Optimization Problems: A Data Generation Perspective

Authors: James Kotary, Ferdinando Fioretto, Pascal Van Hentenryck

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The paper demonstrates a critical challenge in learning to approximate hard optimization problems, connects the variability of the training data to a model's ability to approximate it, and proposes a method for producing (exact or approximate) solutions that are more amenable to supervised learning. The method is tested on hard nonlinear, nonconvex, and discrete combinatorial problems.
Researcher Affiliation | Academia | James Kotary, Syracuse University, Syracuse, NY 13244, jkotary@syr.edu; Ferdinando Fioretto, Syracuse University, Syracuse, NY 13244, ffiorett@syr.edu; Pascal Van Hentenryck, Georgia Institute of Technology, Atlanta, GA 30332, pvh@isye.gatech.edu
Pseudocode | Yes | Algorithm 1: Opt. Data Generation
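The paper's Algorithm 1 itself is not reproduced in this review. Purely as an illustration of the general pattern it names — solving many instances of a hard problem to produce (instance, solution) pairs for supervised learning — the following sketch uses a toy 0/1 knapsack as a stand-in solver; the problem choice and every function name here are hypothetical, not from the paper (whose experiments use job-shop scheduling and AC optimal power flow):

```python
import random

def solve_knapsack(values, weights, capacity):
    """Exact 0/1 knapsack via dynamic programming, standing in for a
    generic hard-problem solver. Returns a 0/1 selection vector, which
    plays the role of the supervised learning target."""
    n = len(values)
    dp = [0] * (capacity + 1)
    keep = [[False] * (capacity + 1) for _ in range(n)]
    for i in range(n):
        # Iterate capacities in descending order so each item is used once.
        for c in range(capacity, weights[i] - 1, -1):
            cand = dp[c - weights[i]] + values[i]
            if cand > dp[c]:
                dp[c] = cand
                keep[i][c] = True
    # Recover the selected-item indicator vector from the keep table.
    sol, c = [0] * n, capacity
    for i in range(n - 1, -1, -1):
        if keep[i][c]:
            sol[i] = 1
            c -= weights[i]
    return sol

def generate_dataset(num_instances, n_items=8, capacity=20, seed=0):
    """Sample random instances and label each with a solver solution,
    yielding (instance, solution) training pairs."""
    rng = random.Random(seed)
    data = []
    for _ in range(num_instances):
        values = [rng.randint(1, 10) for _ in range(n_items)]
        weights = [rng.randint(1, 10) for _ in range(n_items)]
        instance = (values, weights, capacity)
        data.append((instance, solve_knapsack(values, weights, capacity)))
    return data
```

A key point of the paper is that such a solver-labeled mapping can be highly non-smooth (similar instances may receive very different optimal solutions), which is precisely what its data-generation method is designed to mitigate.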
Open Source Code | No | The paper does not include an unambiguous statement from the authors that they are releasing the code for the work described in this paper, nor does it provide a direct link to a source-code repository for their implementation.
Open Datasets | Yes | The experiments examine the proposed models on a variety of problems from the JSPLIB library [28]. ... Pegase-89, which is a coarse aggregation of the French system, and IEEE-118 and IEEE-300, from the NESTA library [10].
Dataset Splits | No | The paper mentions training and testing, but it does not specify a separate validation set (e.g., percentages, sample counts, or an explicit mention of a validation split) used to tune hyperparameters during training.
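For reference, the kind of specification this check looks for is a concrete three-way partition. The sketch below is hypothetical; the 80/10/10 fractions and the function name are illustrative choices, not numbers from the paper:

```python
import random

def three_way_split(data, val_frac=0.1, test_frac=0.1, seed=0):
    """Shuffle and partition a dataset into train/validation/test lists.
    The default 80/10/10 split is illustrative only."""
    rng = random.Random(seed)
    idx = list(range(len(data)))
    rng.shuffle(idx)
    n_test = int(len(data) * test_frac)
    n_val = int(len(data) * val_frac)
    test = [data[i] for i in idx[:n_test]]
    val = [data[i] for i in idx[n_test:n_test + n_val]]
    train = [data[i] for i in idx[n_test + n_val:]]
    return train, val, test
```

Reporting the fractions (or counts) and the seed of such a split is what would let a reader reproduce the hyperparameter-tuning setup.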
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts, or detailed machine specifications) used for running its experiments.
Software Dependencies | No | The paper mentions the "IBM CP Optimizer constraint programming software", the "Julia package PowerModels.jl", and the "nonlinear solver IPOPT", but it does not provide specific version numbers for any of these software components.
Experiment Setup | No | The paper mentions a "solving time limit of 1800 seconds" for data generation and states that "observations are robust over a wide range of hyper-parameters adopted to train the learning models" (with details in Appendix D), but it does not provide concrete hyperparameter values or detailed training configurations for the learning models in the main text.