A Deep Reinforcement Learning Framework for Column Generation

Authors: Cheng Chi, Amine Aboussalah, Elias Khalil, Juyoung Wang, Zoha Sherkat-Masoumi

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We perform an extensive set of experiments using the publicly available BPPLIB benchmark for CSP and Solomon benchmark for VRPTW. RLCG converges faster and reduces the number of CG iterations by 22.4% for CSP and 40.9% for VRPTW on average compared to a commonly used greedy policy.
Researcher Affiliation | Academia | Cheng Chi, University of Toronto; Amine Mohamed Aboussalah, New York University; Elias B. Khalil, University of Toronto; Juyoung Wang, University of Toronto; Zoha Sherkat-Masoumi, University of Toronto
Pseudocode | Yes | Algorithm 1 shows how a trained RLCG agent is applied to solve a problem instance. [A sketch of this loop follows the table.]
Open Source Code | Yes | Our code is available at this link.
Open Datasets | Yes | We use BPPLIB [Delorme et al., 2018], a widely used collection of benchmark instances for bin packing and cutting stock problems, which includes a number of instances proven to be difficult to solve [Delorme and Iori, 2020, Wei et al., 2020, Martinovic et al., 2020].
Dataset Splits | Yes | Our training set has instances with n = 50, 100, 200. The remaining instances with n = 50, 100, 200, 750 are split into a validation set and a testing set, with hard instances (n = 750) only appearing in the testing set, as it is very expensive to solve such large instances during training. [A split sketch follows the table.]
Hardware Specification | Yes | Our computing environment is described in Appendix Section C.
Software Dependencies | No | The paper does not explicitly state specific version numbers for software dependencies or libraries used in the experiments.
Experiment Setup | Yes | We tune the main hyperparameters: α in the reward function (1), the exploration probability ε in DQN, γ the discount factor, and the learning rate lr in gradient descent. ... The best configuration is: α = 300, ε = 0.05, γ = 0.9, and lr = 0.001. [A configuration sketch follows the table.]
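
The Pseudocode row quotes Algorithm 1, which applies a trained RLCG agent inside a column generation loop. Below is a minimal sketch of that loop, not the authors' implementation: the callables (solve_master, solve_pricing, build_state, initial_columns) and the agent.select method are hypothetical stand-ins, passed in as arguments so the sketch stays self-contained.

```python
# Minimal sketch of column generation driven by a trained agent (cf. the
# quoted Algorithm 1). All helpers are hypothetical stand-ins, not the
# paper's API; they are injected so the function is self-contained.

def rlcg_solve(instance, agent, solve_master, solve_pricing,
               build_state, initial_columns, max_iters=1000):
    """Run column generation, letting the agent choose which column to add."""
    columns = initial_columns(instance)        # feasible starting columns
    objective = None
    for _ in range(max_iters):
        # Solve the restricted master LP over the current columns;
        # its dual values parameterize the pricing problem.
        duals, objective = solve_master(instance, columns)
        # Pricing proposes candidate columns with negative reduced cost.
        candidates = solve_pricing(instance, duals)
        if not candidates:                     # no improving column: stop
            break
        # The agent scores candidates from the current state (rather than
        # greedily taking the most negative reduced cost) and one chosen
        # column is appended to the restricted master.
        state = build_state(instance, columns, candidates, duals)
        columns.append(agent.select(state, candidates))
    return objective, columns
```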
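The Dataset Splits row partitions instances by size n. The sketch below shows one way to realize that split, assuming instances arrive as (n, instance) pairs; the split fractions are placeholders, since the quoted text fixes only the sizes involved and the rule that n = 750 appears in testing alone.

```python
import random

def split_by_size(instances, train_frac=0.8, val_frac=0.5, seed=0):
    """Split (n, instance) pairs by size; n = 750 goes to testing only.
    The fractions are illustrative assumptions, not the paper's values."""
    rng = random.Random(seed)
    small = [p for p in instances if p[0] in (50, 100, 200)]
    hard = [inst for n, inst in instances if n == 750]
    rng.shuffle(small)
    cut = int(train_frac * len(small))
    train = [inst for _, inst in small[:cut]]   # training: n in {50, 100, 200}
    rest = [inst for _, inst in small[cut:]]    # held out for validation/test
    mid = int(val_frac * len(rest))
    val, test = rest[:mid], rest[mid:] + hard   # hard instances: test only
    return train, val, test
```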
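The Experiment Setup row reports the tuned hyperparameters. Below, those reported values are collected into a configuration dict next to a standard epsilon-greedy rule of the kind DQN training uses; the key names and the helper are illustrative assumptions, while the numeric values are the paper's reported best configuration.

```python
import random

# Reported best configuration; key names are assumptions, values are the paper's.
BEST_CONFIG = {
    "alpha": 300,     # coefficient in the reward function (1)
    "epsilon": 0.05,  # exploration probability in DQN
    "gamma": 0.9,     # discount factor
    "lr": 0.001,      # learning rate for gradient descent
}

def epsilon_greedy(q_values, epsilon=BEST_CONFIG["epsilon"], rng=random):
    """Standard epsilon-greedy action choice: explore with probability epsilon."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))                      # explore
    return max(range(len(q_values)), key=q_values.__getitem__)   # exploit
```

For example, epsilon_greedy([0.1, 0.7, 0.2]) returns the greedy index 1 with probability 0.95 under this configuration, and a uniformly random index otherwise.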