Neur2RO: Neural Two-Stage Robust Optimization

Authors: Justin Dumouchelle, Esther Julien, Jannis Kurtz, Elias Boutros Khalil

ICLR 2024

Reproducibility (Variable / Result / LLM Response):

Research Type: Experimental
LLM Response: "We propose Neur2RO, an efficient machine learning-driven instantiation of column-and-constraint generation (CCG), a classical iterative algorithm for 2RO. ... Embedding our neural network into CCG yields high-quality solutions quickly, as evidenced by experiments on two 2RO benchmarks, knapsack and capital budgeting. ... Our code and data are available at https://github.com/khalil-research/Neur2RO. ... 4 EXPERIMENTAL SETUP ... 5 EXPERIMENTAL RESULTS"

Researcher Affiliation: Academia
LLM Response: "Justin Dumouchelle, University of Toronto; Esther Julien, TU Delft; Jannis Kurtz, University of Amsterdam; Elias B. Khalil, University of Toronto"

Pseudocode: Yes
LLM Response: "Algorithm 1 Column-and-Constraint Generation"

Open Source Code: Yes
LLM Response: "Our code and data are available at https://github.com/khalil-research/Neur2RO."

Open Datasets: Yes
LLM Response: "We benchmark Neur2RO on two 2RO problems from the literature, namely a two-stage knapsack problem and the capital budgeting problem. In both cases, our instances are as large as or larger than those considered in the literature. The two-stage knapsack problem is, in the first stage, a classical knapsack problem; the second stage has decisions for responding to an uncertain profit degradation. The capital budgeting problem is described in the introduction. For a detailed description of these problems, see Appendix A. Below we briefly detail each problem. Knapsack. For the knapsack problem, we use the same instances as in Arslan & Detienne (2022), which were inspired by Ben-Tal et al. (2009). ... Capital budgeting. These problem instances are generated similarly to Subramanyam et al. (2020)."

Dataset Splits: Yes
LLM Response: "The dataset is split into 200,000 and 50,000 samples for training and validation, respectively."

Hardware Specification: Yes
LLM Response: "All experiments were run on a computing cluster with an Intel Xeon E5-2683 CPU and an Nvidia Tesla P100 GPU, with 64GB of RAM (for training)."

Software Dependencies: Yes
LLM Response: "PyTorch 1.12.1 (Paszke et al., 2019) was used for all learning models. Gurobi 10.0.2 (Gurobi Optimization, LLC, 2023) was used as the MILP solver, and gurobi-machinelearning 1.3.0 was used to embed the neural networks into MILPs."

Experiment Setup: Yes
LLM Response: "We train one size-independent model for each problem for 500 epochs. The data collection times, training times, and total times (in seconds) are 2,162, 3,789, and 5,951 for knapsack and 3,212, 2,195, and 5,407 for capital budgeting. ... Appendix I provides full detail on model hyperparameters and training. ... Table 12 reports the hyperparameters for each model. ... Batch size: 256; Learning rate: 0.001; Dropout: 0; Loss function: MSELoss; Optimizer: Adam"
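The pseudocode the report points to (Algorithm 1, Column-and-Constraint Generation) alternates between a master problem over first-stage decisions and an adversarial problem that generates the worst-case scenario for the incumbent. A minimal sketch of that loop on a toy problem with finite candidate sets follows; all problem data here is invented for illustration, and the paper's actual implementation solves MILP master/adversarial subproblems with Gurobi and a learned value-function surrogate rather than the brute-force enumeration below.

```python
def ccg(first_stage_candidates, uncertainty_set, cost, tol=1e-9, max_iters=100):
    """Toy CCG: min over x of max over u of cost(x, u), both sets finite."""
    scenarios = [uncertainty_set[0]]  # start with a single scenario
    for _ in range(max_iters):
        # Master problem: best x against the scenarios generated so far
        # (a lower bound on the true robust objective).
        x, lb = min(
            ((x, max(cost(x, u) for u in scenarios))
             for x in first_stage_candidates),
            key=lambda t: t[1],
        )
        # Adversarial problem: worst-case scenario for the incumbent x
        # (an upper bound on the true robust objective).
        u_worst, ub = max(
            ((u, cost(x, u)) for u in uncertainty_set),
            key=lambda t: t[1],
        )
        if ub - lb <= tol:  # bounds meet: x is robust-optimal
            return x, ub
        scenarios.append(u_worst)  # add the violating scenario and repeat
    return x, ub

# Tiny robust-selection example: choose x to minimize worst-case |x - u|.
xs = [0.0, 0.5, 1.0]
us = [0.0, 1.0]
x_star, obj = ccg(xs, us, cost=lambda x, u: abs(x - u))
# x = 0.5 hedges against both u = 0 and u = 1, giving worst-case cost 0.5.
```

Neur2RO's contribution, per the abstract quoted above, is to replace the expensive exact subproblems in this loop with evaluations of a trained neural network embedded into the MILPs via gurobi-machinelearning.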