Bayesian Optimization over Permutation Spaces

Authors: Aryan Deshwal, Syrine Belakaria, Janardhan Rao Doppa, Dae Hyun Kim (pp. 6515-6523)

AAAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our experiments on multiple synthetic and real-world benchmarks show that both BOPS-T and BOPS-H perform better than the state-of-the-art BO algorithm for combinatorial spaces."
Researcher Affiliation | Academia | Aryan Deshwal, Syrine Belakaria, Janardhan Rao Doppa, Dae Hyun Kim, School of EECS, Washington State University; {aryan.deshwal, syrine.belakaria, jana.doppa, daehyun.kim}@wsu.edu
Pseudocode | Yes | Algorithm 1: Bayesian Optimization over Permutation Spaces (BOPS). A minimal illustrative sketch of this loop appears after the table.
Open Source Code | Yes | The resources and source code of the BOPS algorithms are available at https://github.com/aryandeshwal/BOPS/.
Open Datasets | Yes | The paper states: 'We employ diverse and challenging benchmarks for blackbox optimization over permutations for our experiments.' Two synthetic benchmarks are used: 1) Quadratic assignment problem (QAP), with instances from QAPLIB (Burkard, Karisch, and Rendl 1997), a popular library...; and 2) Traveling salesman problem (TSP), with instances derived from low-dimensional variants of the printed circuit board (PCB) problems in the TSPLIB library (Reinelt 1995). The paper also cites Rodinia: A Benchmark Suite for Heterogeneous Computing (Che et al. 2009). A toy QAP objective is sketched after the table.
Dataset Splits | No | The paper mentions training and testing sets for evaluation: 'We plot the negative log-likelihood (NLL) of the three surrogate models on a testing set of 50 instances as a function of the increasing size of training data.' and 'Each experiment is replicated with 10 different training sets and each method is evaluated using the median of the NLL metric on 10 different test sets of 50 permutations each.' However, it does not define a validation set or give the splitting methodology (e.g., percentages, k-fold cross-validation, or explicit split files) needed to reproduce the train/validation/test partitioning. A sketch of this NLL evaluation protocol follows the table.
Hardware Specification | No | The paper cites 'GPyTorch: Blackbox Matrix-Matrix Gaussian Process Inference with GPU Acceleration' but does not specify the exact GPU models, CPU types, or other hardware used to run the experiments.
Software Dependencies | No | The paper names the software libraries used, such as GPyTorch (Gardner et al. 2018), BoTorch (Balandat et al. 2020), and the SDP-relaxation-based QAP solver code from https://github.com/fsbravo/csdp, but it does not provide version numbers for these dependencies.
Experiment Setup | Yes | 'We used 10 restarts for local search based EI optimization for BOPS-H. BOPS-T, BOPS-H, and COMBO are initialized with the same 20 random permutations in each experiment.' These settings are mirrored in the BOPS loop sketch below.
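
The QAP benchmark referenced above is a black-box objective over permutations. For concreteness, here is a minimal sketch of a QAP cost evaluation on a randomly generated instance; the names `flow`, `dist`, and `qap_cost` are illustrative placeholders, not taken from QAPLIB or the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10                                   # number of facilities / locations
flow = rng.integers(0, 10, size=(n, n))  # flow between facilities (toy instance)
dist = rng.integers(0, 10, size=(n, n))  # distance between locations (toy instance)

def qap_cost(perm):
    """Cost of assigning facility i to location perm[i]:
    sum over (i, j) of flow[i, j] * dist[perm[i], perm[j]]."""
    return float((flow * dist[np.ix_(perm, perm)]).sum())

print(qap_cost(rng.permutation(n)))      # evaluate one candidate permutation
```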
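Algorithm 1 (BOPS) and the experiment-setup row describe the same loop: fit a GP surrogate over permutations, maximize expected improvement (EI) by restarted local search, evaluate the chosen permutation, and repeat from 20 random initial permutations. The following is a self-contained sketch under stated assumptions: a Mallows-type kernel based on Kendall-tau distance (one kernel in the permutation-kernel family the paper builds on), a toy objective, and an adjacent-swap neighborhood for local search. All names here (`objective`, `kendall_kernel`, `gp_posterior`, `local_search_ei`) are hypothetical; the authors' GPyTorch/BoTorch implementation at https://github.com/aryandeshwal/BOPS/ is the reference.

```python
import numpy as np
from scipy.stats import norm

def objective(p):
    """Toy black-box stand-in (deviation from the identity permutation)."""
    return float(np.sum(np.abs(np.asarray(p) - np.arange(len(p)))))

def kendall_kernel(p, q, lam=0.5):
    """exp(-lam * #discordant pairs): a Mallows-type kernel on permutations."""
    n = len(p)
    disc = sum((p[i] < p[j]) != (q[i] < q[j])
               for i in range(n) for j in range(i + 1, n))
    return np.exp(-lam * disc)

def gram(X, Y):
    return np.array([[kendall_kernel(x, y) for y in Y] for x in X])

def gp_posterior(X, y, Xq, noise=1e-6):
    """Exact GP posterior mean/std at query permutations Xq (here k(x, x) = 1)."""
    K = gram(X, X) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = gram(Xq, X)
    mu = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.clip(1.0 - np.sum(v ** 2, axis=0), 1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sigma, best):
    """EI for minimization."""
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def local_search_ei(X, z, n, rng, restarts=10):
    """Hill-climb EI over the adjacent-swap neighborhood, with 10 restarts
    (matching the experiment-setup row). Refits the GP per query for brevity."""
    def ei_of(perm):
        mu, sd = gp_posterior(X, z, [perm])
        return expected_improvement(mu, sd, z.min())[0]
    best_perm, best_val = None, -np.inf
    for _ in range(restarts):
        cur = rng.permutation(n)
        cur_val = ei_of(cur)
        improved = True
        while improved:
            improved = False
            for i in range(n - 1):            # adjacent transpositions
                nb = cur.copy()
                nb[i], nb[i + 1] = nb[i + 1], nb[i]
                val = ei_of(nb)
                if val > cur_val:
                    cur, cur_val, improved = nb, val, True
        if cur_val > best_val:
            best_perm, best_val = cur, cur_val
    return best_perm

# BOPS-style loop: 20 random initial permutations, then sequential queries.
rng = np.random.default_rng(0)
n = 8
X = [rng.permutation(n) for _ in range(20)]
y = np.array([objective(p) for p in X])
for _ in range(10):
    z = (y - y.mean()) / y.std()              # standardize targets for the GP
    nxt = local_search_ei(X, z, n, rng)
    X.append(nxt)
    y = np.append(y, objective(nxt))
print("best value found:", y.min())
```

Note that the paper pairs its two BOPS variants with different acquisition optimizers (an SDP-relaxation-based optimizer for BOPS-T and local search for BOPS-H); the sketch shows only the local-search route.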
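The dataset-splits row quotes an NLL-based surrogate evaluation: median NLL on held-out test sets of 50 permutations as the training-set size grows. Since the exact split protocol is not specified in the paper, the following is only a plausible sketch of that protocol, reusing `objective` and `gp_posterior` from the sketch above; `gaussian_nll` and the training sizes are illustrative assumptions.

```python
# Reuses numpy, `objective`, and `gp_posterior` from the BOPS sketch above.
def gaussian_nll(y_true, mu, sigma):
    """Per-point Gaussian negative log-likelihood of the GP predictions."""
    return 0.5 * np.log(2 * np.pi * sigma ** 2) + (y_true - mu) ** 2 / (2 * sigma ** 2)

rng = np.random.default_rng(1)
n = 8
test = [rng.permutation(n) for _ in range(50)]   # held-out set of 50 permutations
y_test = np.array([objective(p) for p in test])
for m in (10, 20, 40):                           # increasing training-set size
    train = [rng.permutation(n) for _ in range(m)]
    y_tr = np.array([objective(p) for p in train])
    # Fit on standardized targets, then map predictions back to the original scale.
    mu_z, sd_z = gp_posterior(train, (y_tr - y_tr.mean()) / y_tr.std(), test)
    mu, sd = mu_z * y_tr.std() + y_tr.mean(), sd_z * y_tr.std()
    print(m, "median NLL:", float(np.median(gaussian_nll(y_test, mu, sd))))
```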