SymILO: A Symmetry-Aware Learning Framework for Integer Linear Optimization

Authors: Qian Chen, Tianjian Zhang, Linxin Yang, Qingyu Han, Akang Wang, Ruoyu Sun, Xiaodong Luo, Tsung-Hui Chang

NeurIPS 2024

Reproducibility Variable Result LLM Response
Research Type Experimental We conduct extensive experiments on ILPs involving different symmetries, and the computational results demonstrate that our symmetry-aware approach significantly outperforms three existing methods, achieving 50.3%, 66.5%, and 45.4% average improvements, respectively.
Researcher Affiliation Academia (1) School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen, China; (2) Shenzhen Research Institute of Big Data, China; (3) School of Data Science, The Chinese University of Hong Kong, Shenzhen, China; (4) Shenzhen International Center for Industrial and Applied Mathematics, Shenzhen Research Institute of Big Data, China
Pseudocode Yes Algorithm 1: Alternating optimization (a hedged, generic skeleton of such an alternating loop appears after this table)
Open Source Code Yes The corresponding source code is available at https://github.com/NetSysOpt/SymILO.
Open Datasets Yes We evaluate the proposed framework on four ILP benchmarks with certain symmetry, which consist of (i) two problems with symmetric groups: the item placement problem (IP) and the steel mill slab problem (SMSP), (ii) the periodic event scheduling problem (PESP) with a cyclic group, and (iii) a modified variant of PESP (PESPD) which has a dihedral group. The first benchmark, IP, is from the NeurIPS ML4CO 2021 competition (Gasse et al., 2022). The SMSP benchmark is from Schaus et al. (2011). The last two benchmarks are from PESPlib (Goerigk, 2012).
Dataset Splits Yes For all training sets, 30% of the instances are used for validation.
Hardware Specification Yes The evaluation machine has one AMD EPYC 7H12 64-Core Processor @ 2.60GHz, 256GB RAM, and one NVIDIA GeForce RTX 3080.
Software Dependencies Yes CPLEX 22.2.0 and PyTorch 2.0.1 (Paszke et al., 2019) are utilized in our experiments.
Experiment Setup Yes All models are trained with a batch size of 16 for 50 epochs. The Adam optimizer with a learning rate of 0.001 is used, and other hyperparameters of the optimizer are set to their default values. The model with the smallest loss on the validation set is used for subsequent evaluations. Other training settings, such as the loss function and neural architectures, follow the configurations in Han et al. (2023). More details about the hyper-parameter tuning for the downstream tasks and software resources are shown in Section E. (The first sketch after this table illustrates these reported settings.)
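
The Dataset Splits and Experiment Setup rows above pin down concrete training choices: a 30% validation split, batch size 16, 50 epochs, Adam with learning rate 0.001 and otherwise default hyperparameters, and checkpoint selection by the smallest validation loss. The following is a minimal, hedged PyTorch sketch of that configuration; the data, predictor, and loss function are generic placeholders and do not reproduce the paper's architecture or loss (which follow Han et al. (2023)).

```python
# Hedged sketch of the reported training configuration. Only the split ratio,
# batch size, epoch count, optimizer, learning rate, and model-selection rule
# come from the table above; the data, model, and loss are placeholders.
import torch
from torch.utils.data import DataLoader, TensorDataset, random_split

# Placeholder data: 100 instances with 32 features and 32 binary targets.
features = torch.randn(100, 32)
labels = torch.randint(0, 2, (100, 32)).float()
dataset = TensorDataset(features, labels)

# 30% of the training instances are held out for validation (as reported).
n_val = int(0.3 * len(dataset))
train_set, val_set = random_split(dataset, [len(dataset) - n_val, n_val])
train_loader = DataLoader(train_set, batch_size=16, shuffle=True)  # batch size 16
val_loader = DataLoader(val_set, batch_size=16)

# Placeholder predictor standing in for the paper's neural architecture.
model = torch.nn.Sequential(torch.nn.Linear(32, 64), torch.nn.ReLU(),
                            torch.nn.Linear(64, 32))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # Adam, lr = 0.001
loss_fn = torch.nn.BCEWithLogitsLoss()  # placeholder loss, not the paper's

best_val, best_state = float("inf"), None
for epoch in range(50):  # 50 epochs
    model.train()
    for x, y in train_loader:
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()

    # Keep the checkpoint with the smallest validation loss, as reported.
    model.eval()
    with torch.no_grad():
        val_loss = sum(loss_fn(model(x), y).item() for x, y in val_loader)
    if val_loss < best_val:
        best_val = val_loss
        best_state = {k: v.clone() for k, v in model.state_dict().items()}

# The best-validation checkpoint is the one used for subsequent evaluation.
if best_state is not None:
    model.load_state_dict(best_state)
```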
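
The Pseudocode row only names "Algorithm 1: Alternating optimization". As a rough, generic illustration of what such an alternating loop can look like, the skeleton below alternates between (i) selecting, for each training label, a symmetry transformation that best aligns it with the current prediction and (ii) gradient updates of the model against the transformed labels. This decomposition and all names (`alternating_optimization`, `group_elements`) are assumptions made for illustration; the skeleton does not reproduce the paper's Algorithm 1.

```python
# Generic alternating-optimization skeleton (NOT the paper's Algorithm 1).
# Assumptions: `model` maps an instance tensor to a prediction of the same
# shape as its label, `dataset` is a list of (x, y) tensor pairs, and
# `group_elements` is a list of callables, each acting on a label tensor.
import torch

def alternating_optimization(model, dataset, group_elements, epochs=50):
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        # Subproblem 1 (parameters fixed): for each label, choose the group
        # element whose action is closest to the current prediction.
        with torch.no_grad():
            transformed = [
                min((g(y) for g in group_elements),
                    key=lambda gy: torch.norm(model(x) - gy).item())
                for x, y in dataset
            ]
        # Subproblem 2 (transformed labels fixed): gradient steps on the model.
        for (x, _), gy in zip(dataset, transformed):
            optimizer.zero_grad()
            torch.norm(model(x) - gy).backward()
            optimizer.step()
    return model
```

For a small symmetric or cyclic group, each `g` could simply permute or rotate the entries of `y`; for larger groups, the inner minimization would be handled by a dedicated subroutine rather than by enumeration as above.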