Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Search Strategy Generation for Branch and Bound Using Genetic Programming

Authors: Gwen Maudet, Grégoire Danoy

AAAI 2025 | Venue PDF | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental Our first evaluation includes three types of primal hard problems, tested on instances similar to the training set and on larger instances. Our method is at most 2 percent slower than the best baseline and consistently outperforms SCIP, achieving an average speedup of 11.3 percent. Additionally, GP2S is tested on the MIPLIB 2017 dataset, generating multiple heuristics from different subsets of instances.
Researcher Affiliation Academia 1SnT, University of Luxembourg, Esch-sur-Alzette, Luxembourg 2FSTM/DCS, SnT, University of Luxembourg, Esch-sur-Alzette, Luxembourg EMAIL
Pseudocode No The paper describes the general Branch-and-Bound algorithm and the Genetic Programming framework conceptually, and illustrates selection, crossover, and mutation processes with figures. However, it does not present these processes or any other method in a structured pseudocode or algorithm block format.
Open Source Code Yes All simulations use the SCIP solver (Bolusani et al. 2024; Maher et al. 2016), with complete code available on the Git repository: https://gitlab.com/uniluxembourg/snt/pcog/ultrabo/search-strategy-generation-for-branch-and-bound-using-genetic-programming.git. Detailed integration of our method into SCIP is also provided within the framework.
Open Datasets Yes We conducted experiments using the MIPLIB 2017 collection (Gleixner et al. 2021)... We evaluate three challenging primal problem types, each with 50 instances: a training set for development, a test set with similar instances for effectiveness, and a transfer set with larger instances for generalizability. The problem types are: (i) FCMCNF (Hewitt, Nemhauser, and Savelsbergh 2009)...; (ii) MAXSAT (Béjar et al. 2009); (iii) GISP (Chmiela et al. 2021).
Dataset Splits Yes We evaluate three challenging primal problem types, each with 50 instances: a training set for development, a test set with similar instances for effectiveness, and a transfer set with larger instances for generalizability... From the reduced set, we randomly select 50 instances with a 10-second time limit for the GP process.
Hardware Specification Yes For the first phase, requiring GPU resources: 1 Xeon Gold 6132 @ 2.6GHz [14c/140W], 2 Tesla V100 SXM2 16G. For the second phase, which only required CPU resources: either 2 Xeon E5-2680v4 @ 2.4GHz [14c/120W] or 2 Xeon Gold 6132 @ 2.6GHz [14c/140W] nodes.
Software Dependencies Yes We used SCIP 9.1 along with the DEAP 1.4.1 package as the framework for implementing our GP algorithm... This baseline relies on PyTorch, and we used PyTorch 2.4.0 for our implementation.
Experiment Setup Yes the probabilities for crossover and mutation are set to Pmate = 0.9 and Pmutate = 0.1; the depth of population initialization is restricted by minimum and maximum limits of Dinit-min = 1 and Dinit-max = 17; the depth of branches generated during mutation is bounded by Dmut-min = 1 and Dmut-max = 5; the parameter related to the selection of the smallest tree size is set to Psize = 1.2... In the first phase... uses 50 individuals over 50 generations with a tournament size of k = 5... In the second phase... population size to 20, limit generations to 20, and use a fitness tournament size of k = 3.
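The reported hyperparameters can be made concrete with a minimal, stdlib-only sketch of the evolutionary loop. This is not the authors' implementation (which uses DEAP 1.4.1 and scores fitness by running SCIP): the primitives add/sub/mul, the terminals depth/gap/bound, the toy fitness target, and the Koza-style static depth limit are illustrative assumptions. Only the numeric parameters (Pmate = 0.9, Pmutate = 0.1, init depths 1-17, mutation depths 1-5, Psize = 1.2, population 50 over 50 generations, tournament size k = 5) come from the paper's first phase.

```python
import copy
import random

random.seed(0)

# Hyperparameters reported for the first GP phase of the paper.
P_MATE, P_MUTATE = 0.9, 0.1          # crossover / mutation probabilities
D_INIT_MIN, D_INIT_MAX = 1, 17       # initialization depth bounds
D_MUT_MIN, D_MUT_MAX = 1, 5          # depth bounds for mutated branches
P_SIZE = 1.2                         # parsimony pressure parameter
POP_SIZE, N_GEN, K = 50, 50, 5       # population, generations, tournament size

OPS = ("add", "sub", "mul")          # toy primitives (assumption)
TERMS = ("depth", "gap", "bound")    # toy node features (assumption)

def grow(d):
    """Random expression tree (nested lists) of at most depth d."""
    if d <= 0 or random.random() < 0.3:
        return random.choice(TERMS)
    return [random.choice(OPS), grow(d - 1), grow(d - 1)]

def size(t):
    return 1 if isinstance(t, str) else 1 + size(t[1]) + size(t[2])

def depth(t):
    return 0 if isinstance(t, str) else 1 + max(depth(t[1]), depth(t[2]))

def evaluate(t, env):
    if isinstance(t, str):
        return env[t]
    op, a, b = t
    x, y = evaluate(a, env), evaluate(b, env)
    return x + y if op == "add" else x - y if op == "sub" else x * y

def fitness(t):
    # Toy surrogate objective; the paper's real fitness is the B&B
    # solving time of the generated search strategy inside SCIP.
    return abs(evaluate(t, {"depth": 2, "gap": 3, "bound": 1}) - 42)

def paths(t, p=()):
    yield p
    if not isinstance(t, str):
        yield from paths(t[1], p + (1,))
        yield from paths(t[2], p + (2,))

def get(t, p):
    for i in p:
        t = t[i]
    return t

def replace(t, p, sub):
    """Return a copy of t with the subtree at path p replaced by sub."""
    if not p:
        return sub
    t = copy.deepcopy(t)
    node = t
    for i in p[:-1]:
        node = node[i]
    node[p[-1]] = sub
    return t

def select(pop, fits, sizes):
    # Double tournament with parsimony pressure (Psize = 1.2): each of the
    # K candidates comes from a two-way size tournament where the smaller
    # tree wins with probability P_SIZE / 2; fitness then decides.
    def size_tourn():
        a, b = random.sample(range(len(pop)), 2)
        small, big = (a, b) if sizes[a] <= sizes[b] else (b, a)
        return small if random.random() < P_SIZE / 2 else big
    return min((size_tourn() for _ in range(K)), key=lambda i: fits[i])

pop = [grow(random.randint(D_INIT_MIN, D_INIT_MAX)) for _ in range(POP_SIZE)]
for _ in range(N_GEN):
    fits = [fitness(t) for t in pop]
    sizes = [size(t) for t in pop]
    nxt = []
    while len(nxt) < POP_SIZE:
        parent = pop[select(pop, fits, sizes)]
        mate = pop[select(pop, fits, sizes)]
        child = parent
        if random.random() < P_MATE:     # subtree crossover
            donor = get(mate, random.choice(list(paths(mate))))
            child = replace(child, random.choice(list(paths(child))),
                            copy.deepcopy(donor))
        if random.random() < P_MUTATE:   # subtree mutation
            child = replace(child, random.choice(list(paths(child))),
                            grow(random.randint(D_MUT_MIN, D_MUT_MAX)))
        if depth(child) > D_INIT_MAX:    # Koza-style static depth limit
            child = parent               # (added here to bound tree growth)
        nxt.append(child)
    pop = nxt

best = min(pop, key=fitness)
```

The Psize = 1.2 handling above follows the convention of DEAP's selDoubleTournament, where a parsimony_size between 1 and 2 means the smaller tree wins its two-way size tournament with probability parsimony_size / 2; whether the paper applies it in exactly this form is an assumption.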