Efficient Combinatorial Optimization via Heat Diffusion

Authors: Hengyuan Ma, Wenlian Lu, Jianfeng Feng

NeurIPS 2024

Each reproducibility variable below is listed with the extracted result and the supporting excerpt from the paper (the LLM response).
Research Type: Experimental. The proposed methodology demonstrates superior performance across a range of the most challenging and widely encountered combinatorial optimization problems. Empirically, our framework demonstrates superior performance compared to advanced algorithms across a diverse range of combinatorial optimization instances, spanning quadratic to polynomial, binary to ternary, unconstrained to constrained, and discrete to mixed-variable scenarios. We apply our HeO to a variety of NP-hard combinatorial optimization problems to demonstrate its broad applicability.
Researcher Affiliation: Academia. Hengyuan Ma, Institute of Science and Technology for Brain-inspired Intelligence, Fudan University, Shanghai, China 200433, hangyuanma21@m.fudan.edu.cn
Pseudocode: Yes. Algorithm 1: Heat diffusion optimization (HeO); Algorithm 2: Monte Carlo gradient estimation for combinatorial optimization (MCGE); Algorithm 3: Heat diffusion optimization (HeO) with momentum; Algorithm 4: Heat diffusion optimization (HeO) for the 3-SAT problem; Algorithm 5: Heat diffusion optimization (HeO) for training a ternary-value neural network; Algorithm 6: Heat diffusion optimization (HeO) for linear regression variable selection; Algorithm 7: Heat diffusion optimization (HeO) for constrained binary optimization; Algorithm 8: Refinement of the result of MVC
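For orientation, here is a minimal PyTorch sketch of the heat-diffusion idea behind these algorithms, assuming a binary quadratic objective min x^T Q x over x in {-1, 1}^n. The erf squashing, the linearly annealed diffusion time, and the normalized gradient step are illustrative choices for this sketch, not a transcription of the paper's Algorithm 1.

```python
import torch

def heo_sketch(Q, T=5000, gamma=2.0, seed=0):
    """Illustrative heat-diffusion descent for min x^T Q x with x in {-1, 1}^n.

    Sketch only: the erf squashing (a Gaussian-smoothed sign function), the
    linear annealing of the diffusion time tau, and the gradient normalization
    are assumptions, not the paper's exact Algorithm 1.
    """
    torch.manual_seed(seed)
    theta = torch.randn(Q.shape[0], requires_grad=True)
    for t in range(T):
        tau = 1.0 - t / T + 1e-3  # assumed schedule: strong smoothing early, weak late
        x_soft = torch.erf(theta / (tau * 2.0 ** 0.5))  # smooth surrogate of sign(theta)
        loss = x_soft @ Q @ x_soft
        loss.backward()
        with torch.no_grad():
            theta -= gamma * theta.grad / (theta.grad.norm() + 1e-12)
            theta.grad.zero_()
    return torch.sign(theta.detach())

# Usage on a random symmetric matrix standing in for a max-cut-style instance:
Q = torch.randn(50, 50)
x = heo_sketch((Q + Q.T) / 2)
```

Annealing the diffusion time from large to small mirrors the paper's premise: early iterations see a heavily smoothed landscape that is easy to descend, and later iterations sharpen toward the original discrete objective.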
Open Source Code: Yes. The codebase of our study is available at https://github.com/AwakerMhy/HeO.
Open Datasets: Yes. Max-cut problems in the Biq Mac Library [36] (https://biqmac.aau.at/biqmaclib.html); 3-SAT instances in SATLIB (https://www.cs.ubc.ca/~hoos/SATLIB/benchm.html); massive real-world graph datasets (http://networkrepository.com/)
Dataset Splits: Yes. We apply five-fold cross-validation for all methods.
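As a point of reference, here is a minimal sketch of a five-fold protocol with scikit-learn; the random data and the plain linear model are placeholders, not the paper's variable-selection benchmark.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

# Placeholder data standing in for the paper's benchmark.
rng = np.random.default_rng(0)
X, y = rng.standard_normal((100, 20)), rng.standard_normal(100)

scores = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = LinearRegression().fit(X[train_idx], y[train_idx])
    scores.append(model.score(X[test_idx], y[test_idx]))
print(f"mean R^2 over 5 folds: {np.mean(scores):.3f}")
```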
Hardware Specification: Yes. All the experiments are conducted on a single NVIDIA RTX-3090 GPU (24 GB) and an Intel(R) Xeon(R) Gold 6226R CPU (2.90 GHz).
Software Dependencies: No. The paper mentions PyTorch's autograd but does not specify its version or any other software dependencies with version numbers, which reproducibility requires.
Experiment Setup: Yes. We set the momentum κ = 0.9999, learning rate γ = 2, and iteration number T = 5000 for HeO (Alg. 3). For MCGE without momentum, we set T = 50000, γ = 1e-6, κ = 0, and M = 10. For MCGE with momentum, we set the momentum to 0.9.
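To make the MCGE baseline concrete, below is a hedged sketch of a score-function (REINFORCE-style) estimator wired to the reported hyperparameters (T = 50000, γ = 1e-6, M = 10, κ = 0 or 0.9). The Bernoulli parametrization over {0, 1}^n and the batch-mean baseline are assumptions, not a transcription of the paper's Algorithm 2.

```python
import torch

def mcge_sketch(f, n, T=50000, gamma=1e-6, kappa=0.0, M=10, seed=0):
    """Illustrative score-function (REINFORCE) descent on E[f(x)], x in {0, 1}^n.

    Sketch only: the Bernoulli(sigmoid(theta)) parametrization and the
    batch-mean baseline are assumptions, not the paper's exact Algorithm 2.
    """
    torch.manual_seed(seed)
    theta = torch.zeros(n)
    velocity = torch.zeros(n)
    for _ in range(T):
        p = torch.sigmoid(theta)
        x = torch.bernoulli(p.repeat(M, 1))    # M binary samples per step
        fx = torch.stack([f(xi) for xi in x])  # objective value per sample
        score = x - p                          # grad of log Bernoulli(x; sigmoid(theta))
        grad = ((fx - fx.mean()).unsqueeze(1) * score).mean(0)  # baseline-subtracted estimate
        velocity = kappa * velocity + (1 - kappa) * grad
        theta = theta - gamma * velocity
    return (torch.sigmoid(theta) > 0.5).float()

# Usage with a placeholder quadratic cost (hyperparameters loosened for a quick demo):
Q = torch.randn(30, 30)
Q = (Q + Q.T) / 2
x = mcge_sketch(lambda s: (2 * s - 1) @ Q @ (2 * s - 1), n=30, T=2000, gamma=1e-3, kappa=0.9)
```

The batch-mean baseline is a standard control variate for reducing the variance of score-function estimates, which is what makes very small learning rates such as the reported γ = 1e-6 workable at all.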