Graph Learning Assisted Multi-Objective Integer Programming
Authors: Yaoxin Wu, Wen Song, Zhiguang Cao, Jie Zhang, Abhishek Gupta, Mingyan Lin
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In the experiments, we deploy the proposed graph learning method on a state-of-the-art ODA [29], and evaluate it on two benchmark problems with varying numbers of objectives. We compare the learned algorithm with various baselines in literature, where multiple metrics are adopted. In addition, we assess the generalization of the learned reduction rules to MOIP instances which possess more objectives or larger sizes than the ones in training. Finally, we conduct the ablation study to verify the efficacy of the proposed two-stage GNN and the learned reduction rule. |
| Researcher Affiliation | Academia | 1Nanyang Technological University 2Institute of Marine Science and Technology, Shandong University, China 3Singapore Institute of Manufacturing Technology, A*STAR |
| Pseudocode | Yes | The collection of states and the optimal actions is summarized by the pseudocode in Appendix A.5. |
| Open Source Code | No | The paper does not provide an explicit statement about, or link to, open-source code for its described methodology. While the self-assessment checklist indicates 'Yes' for code availability, no link or repository is given in the main text or appendices. |
| Open Datasets | Yes | We conduct experiments on two widely used benchmarks in the domain of MOIP, i.e., multi-objective knapsack problem (MOKP) and assignment problem (MOAP) [31], respectively. (Footnote: http://home.ku.edu.tr/~moolibrary/) |
| Dataset Splits | Yes | We leave 1% data in the training set for validation. |
| Hardware Specification | Yes | Except for GNN with GPU, the other algorithmic procedure is executed with one i9-10940X CPU @ 3.30GHz during testing. For training, ... execute the training with one GeForce RTX 2080 Ti GPU. |
| Software Dependencies | Yes | We collect the training set by solving the 100 instances with the ODA [29] and Gurobi solver (v9.5.1) [46] |
| Experiment Setup | Yes | For training, we set the batch size to 64 and initial learning rate to 0.001. The learning rate decays by 0.2 once the validation loss is not decreased in 8 successive epochs. We employ early stopping to end the training when there is no improvement of validation performance in 16 successive epochs. We always retrieve the GNN model with the lowest validation loss along epochs. We employ Adam optimizer [47] to minimize the loss and execute the training with one GeForce RTX 2080 Ti GPU. |
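
To make the Dataset Splits and Experiment Setup rows concrete, the following is a minimal training-loop sketch assuming PyTorch. The paper releases no code, so the model class (`TwoStageGNN`), its `loss` method, and the `train_instances` dataset are hypothetical placeholders; only the hyperparameters (batch size 64, Adam with initial learning rate 0.001, learning-rate decay by 0.2 after 8 stale epochs, early stopping after 16 stale epochs, retrieval of the best-validation checkpoint, and the 1% held-out validation split) come from the quoted text.

```python
# Minimal sketch under stated assumptions; TwoStageGNN, model.loss, and
# train_instances are hypothetical names, not from the paper's (unreleased) code.
import torch
from torch.utils.data import DataLoader, random_split

# Hold out 1% of the training set for validation (Dataset Splits row).
n_val = max(1, int(0.01 * len(train_instances)))
train_set, val_set = random_split(train_instances, [len(train_instances) - n_val, n_val])
train_loader = DataLoader(train_set, batch_size=64, shuffle=True)   # batch size 64
val_loader = DataLoader(val_set, batch_size=64)

model = TwoStageGNN()                                               # hypothetical model class
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)           # Adam, initial lr 0.001
# Decay the learning rate by a factor of 0.2 after 8 epochs without validation improvement.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.2, patience=8)

best_val, stale, best_state = float("inf"), 0, None
for epoch in range(1000):
    model.train()
    for batch in train_loader:
        optimizer.zero_grad()
        loss = model.loss(batch)                                    # hypothetical loss method
        loss.backward()
        optimizer.step()

    model.eval()
    with torch.no_grad():
        val_loss = sum(model.loss(b).item() for b in val_loader) / len(val_loader)
    scheduler.step(val_loss)

    # Keep the checkpoint with the lowest validation loss; stop after 16 stale epochs.
    if val_loss < best_val:
        best_val, stale = val_loss, 0
        best_state = {k: v.clone() for k, v in model.state_dict().items()}
    else:
        stale += 1
        if stale >= 16:
            break

model.load_state_dict(best_state)                                   # retrieve best model
```

Here `ReduceLROnPlateau(factor=0.2, patience=8)` mirrors the described decay rule, while the manual stale-epoch counter implements the 16-epoch early stop and best-checkpoint retrieval.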