Neural Multi-Objective Combinatorial Optimization with Diversity Enhancement
Authors: Jinbiao Chen, Zizhen Zhang, Zhiguang Cao, Yaoxin Wu, Yining Ma, Te Ye, Jiahai Wang
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on classic MOCO problems show that our NHDE is able to generate a Pareto front with higher diversity, thereby achieving superior overall performance. |
| Researcher Affiliation | Academia | Jinbiao Chen^1, Zizhen Zhang^1, Zhiguang Cao^2, Yaoxin Wu^3, Yining Ma^4, Te Ye^1, and Jiahai Wang^{1,5,6}. ^1 School of Computer Science and Engineering, Sun Yat-sen University, P.R. China; ^2 School of Computing and Information Systems, Singapore Management University, Singapore; ^3 Department of Industrial Engineering & Innovation Sciences, Eindhoven University of Technology, Netherlands; ^4 Department of Industrial Systems Engineering & Management, National University of Singapore, Singapore; ^5 Key Laboratory of Machine Intelligence and Advanced Computing, Ministry of Education, Sun Yat-sen University, P.R. China; ^6 Guangdong Key Laboratory of Big Data Analysis and Processing, Guangzhou, P.R. China |
| Pseudocode | Yes | Algorithm 1 Training algorithm of NHDE-P |
| Open Source Code | Yes | Our code is publicly available at https://github.com/bill-cjb/NHDE |
| Open Datasets | Yes | We evaluate the proposed NHDE on three typical MOCO problems that are commonly studied in the neural MOCO literature [13, 15], namely the multi-objective traveling salesman problem (MOTSP) [51], multi-objective capacitated vehicle routing problem (MOCVRP) [3], and multi-objective knapsack problem (MOKP) [52]. ... Three commonly used benchmark instances developed from TSPLIB [56], i.e., KroAB100, KroAB150, and KroAB200, are also tested. |
| Dataset Splits | No | The paper mentions training on randomly generated instances and evaluating on test instances, but does not specify a distinct validation split or its size/percentage. |
| Hardware Specification | Yes | All the methods are tested with an RTX 3090 GPU and an Intel Xeon 4216 CPU. |
| Software Dependencies | No | The paper mentions using the Adam optimizer and that PPLS/D-C is 'implemented in Python', but it does not provide specific version numbers for any software components (e.g., Python, PyTorch, CUDA, specific library versions). |
| Experiment Setup | Yes | We train NHDE-P with 200 epochs, each containing 5,000 randomly generated instances. We use batch size B = 64 and the Adam [53] optimizer with learning rate 10^-4 (10^-5 for MOKP) and weight decay 10^-6. During training, N = 20 weights are sampled for each instance. During inference, we generate N = 40 and N = 210 uniformly distributed weights for M = 2 and M = 3, respectively, which are then shuffled so as to counteract biases. The diversity factors linearly shift through the N subproblems from (1,0) to (0,1), which implies a gradual focus from achieving convergence (scalar objective) with a few solutions to ensuring comprehensive performance with a multitude of solutions. We set K = 20 and J = 200. (A minimal sketch of this weight schedule follows the table.) |
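
The N uniformly distributed weights and the linearly shifting diversity factors quoted in the experiment-setup row can be reproduced with a short script. This is a minimal sketch, not the authors' code: the paper excerpt does not name the weight-generation scheme, so the simplex-lattice (Das-Dennis style) construction below is an assumption, chosen because H = 39 and H = 19 divisions yield exactly N = 40 (M = 2) and N = 210 (M = 3) weights; `uniform_weights` and `diversity_factors` are hypothetical helper names.

```python
import itertools

import numpy as np


def uniform_weights(n_divisions: int, n_objectives: int) -> np.ndarray:
    """Uniformly distributed weight vectors on the simplex (each sums to 1).

    Assumption: a simplex-lattice construction with H = n_divisions steps,
    which gives C(H + M - 1, M - 1) weights, i.e. 40 for (H=39, M=2)
    and 210 for (H=19, M=3) as reported in the paper.
    """
    weights = []
    # Stars-and-bars: choose M-1 "bar" positions among H + M - 1 slots;
    # the gaps between bars are the integer parts of each weight.
    for bars in itertools.combinations(
        range(n_divisions + n_objectives - 1), n_objectives - 1
    ):
        cuts = (-1,) + bars + (n_divisions + n_objectives - 1,)
        weights.append(
            [(cuts[i + 1] - cuts[i] - 1) / n_divisions for i in range(n_objectives)]
        )
    return np.array(weights)


def diversity_factors(n_weights: int) -> np.ndarray:
    """Diversity factors shifting linearly across the N subproblems,
    from (1, 0) (focus on convergence) to (0, 1) (focus on diversity)."""
    t = np.linspace(0.0, 1.0, n_weights)
    return np.stack([1.0 - t, t], axis=1)


if __name__ == "__main__":
    W = uniform_weights(n_divisions=39, n_objectives=2)  # 40 weights for M = 2
    np.random.shuffle(W)                                 # shuffled to counteract biases
    F = diversity_factors(len(W))                        # (1,0) -> (0,1) schedule
    print(W.shape, F.shape)                              # (40, 2) (40, 2)
```

The reported optimizer setting would correspond to a standard PyTorch call such as `torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-6)` (with `lr=1e-5` for MOKP); this is ordinary PyTorch usage inferred from the quoted hyperparameters, not code taken from the released repository.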