Learning to Solve Quadratic Unconstrained Binary Optimization in a Classification Way

Authors: Ming Chen, Jie Chun, Shang Xiang, Luona Wei, Yonghao Du, Qian Wan, Yuning Chen, Yingwu Chen

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments demonstrate the effectiveness of the proposed VCM and GST. A well-trained VCM can directly generate near-optimal solutions for QUBO within milliseconds and exhibit remarkable generalization capabilities across both instance sizes and data distributions. For example, a VCM trained on instances of size 10 can produce near-optimal results for instances of size 7,000 in milliseconds.
Researcher Affiliation | Academia | Ming Chen (1), Jie Chun (1), Shang Xiang (2), Luona Wei (3), Yonghao Du (1), Qian Wan (4), Yuning Chen (1), Yingwu Chen (1). (1) College of Systems Engineering, National University of Defense Technology; (2) School of Public Administration, Xiangtan University; (3) College of Electronics and Information Engineering, South-Central Minzu University; (4) National Engineering Research Center of Educational Big Data, Central China Normal University
Pseudocode | Yes | Algorithm 1: The Greedy-guided Self Trainer; Algorithm 2: The Batch Greedy Flip (BGF) algorithm
Open Source Code | Yes | The code is available at https://github.com/cmself100/VCM-QUBO.
Open Datasets | Yes | The datasets used in our experiments include generated instances (G), benchmarks (B), and well-known instances (P)... The B set is B2500(10), consisting of ten ORLIB instances of size 2500 [45]. The P set includes 21 very large instances [46]: P3000(5), P4000(5), P5000(5), P6000(3), and P7000(3).
Dataset Splits | Yes | The datasets used in our experiments include generated instances (G), benchmarks (B), and well-known instances (P)... VCM is trained for 100 epochs under four sets of small-size instances (with 10, 20, 50, and 100 variables)... We use the B set and P set to validate the performance of the trained VCMs.
Hardware Specification | Yes | Experiments were run on an NVIDIA GeForce RTX 3090 GPU and an Intel i9-9900K CPU with 64GB RAM, under Ubuntu 18.04 with PyTorch 1.9.0 in a Python 3.7 environment.
Software Dependencies | Yes | Ubuntu 18.04, PyTorch 1.9.0, Python 3.7.
Experiment Setup | Yes | Following the Parameters Study (see Appendix F), the default VCM parameters are set as h = 128, α = 4, and d = 40. VCM is trained for 100 epochs on four sets of small-size instances (with 10, 20, 50, and 100 variables) with a batch size of 512, resulting in 400 VCMs. Each model is initialized with Xavier initialization [47], and the Adam optimizer is applied with a 10^-4 learning rate and a 0.975 decay factor.
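The Research Type row above talks about generating near-optimal solutions for QUBO instances. As a quick reminder of the objective being optimized, here is a minimal sketch of evaluating the QUBO objective x^T Q x for a binary vector x; the tiny matrix below is purely illustrative and not taken from the paper.

```python
import numpy as np

def qubo_objective(Q, x):
    """Evaluate the QUBO objective x^T Q x for a binary vector x."""
    x = np.asarray(x, dtype=float)
    return float(x @ Q @ x)

# Tiny illustrative 2-variable instance (not from the paper).
Q = np.array([[-1.0, 2.0],
              [0.0, -1.0]])
print(qubo_objective(Q, [1, 0]))  # -1.0
print(qubo_objective(Q, [1, 1]))  # -1 + 2 + 0 - 1 = 0.0
```

For an instance of size 7,000 as in the quoted result, `Q` would simply be a 7000 x 7000 matrix and `x` a binary vector of that length; the evaluation stays a single matrix-vector product.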
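The Pseudocode row cites a Batch Greedy Flip (BGF) algorithm; its batched details live in the paper's Algorithm 2, but the underlying one-flip move can be sketched as a plain single-solution greedy local search. The sketch below is an assumption-laden illustration (maximization convention, single solution, exact per-bit gain derived from f(x) = x^T Q x), not the paper's BGF.

```python
import numpy as np

def qubo_value(Q, x):
    return float(np.asarray(x, dtype=float) @ Q @ np.asarray(x, dtype=float))

def greedy_flip(Q, x):
    # Illustrative one-flip local search for QUBO maximization (NOT the
    # paper's batched Algorithm 2): repeatedly flip the single bit with
    # the best exact objective gain until no flip improves.
    Q = np.asarray(Q, dtype=float)
    x = np.asarray(x, dtype=float).copy()
    d = np.diag(Q)
    while True:
        s = 1.0 - 2.0 * x                  # +1 if flipping 0->1, -1 if 1->0
        g = (Q + Q.T) @ x
        delta = s * (g - 2.0 * d * x + d)  # exact gain of flipping each bit i
        i = int(np.argmax(delta))
        if delta[i] <= 1e-12:              # no improving flip left: local optimum
            return x
        x[i] = 1.0 - x[i]

# Toy instance: optimum sets exactly one of the two bits.
Q2 = np.array([[1.0, -3.0],
               [0.0, 1.0]])
x = greedy_flip(Q2, [1, 1])
print(qubo_value(Q2, x))  # 1.0
```

Each accepted flip strictly increases the objective, so the loop terminates; a batched variant would apply this gain computation to many candidate solutions at once.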
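The Experiment Setup row mentions Xavier initialization and Adam with a 10^-4 learning rate and a 0.975 decay factor. Assuming the decay is a per-epoch exponential schedule (an assumption; the quoted text does not name the scheduler), the two ingredients can be sketched numerically without a deep-learning framework:

```python
import numpy as np

rng = np.random.default_rng(0)

def xavier_uniform(fan_in, fan_out):
    # Xavier/Glorot uniform init [47]: U(-limit, limit) with
    # limit = sqrt(6 / (fan_in + fan_out)).
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

def lr_at_epoch(epoch, base_lr=1e-4, gamma=0.975):
    # Assumed per-epoch exponential decay: lr_t = base_lr * gamma**epoch.
    return base_lr * gamma ** epoch

W = xavier_uniform(128, 128)          # h = 128 from the quoted setup
limit = np.sqrt(6.0 / 256.0)
print(np.abs(W).max() <= limit)       # True: all weights inside the Xavier bound
print(lr_at_epoch(0))                 # 0.0001
```

In PyTorch 1.9.0 the same setup would use `torch.nn.init.xavier_uniform_`, `torch.optim.Adam(lr=1e-4)`, and an `ExponentialLR` scheduler with `gamma=0.975`.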