Batch Bayesian Optimization on Permutations using the Acquisition Weighted Kernel

Authors: Changyong Oh, Roberto Bondesan, Efstratios Gavves, Max Welling

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Empirically, we assess the method on several standard combinatorial problems involving permutations such as quadratic assignment, flowshop scheduling and the traveling salesman, as well as on a structure learning task." "We empirically demonstrate the benefit of LAW on many optimization problems on permutations." (Sec. 5, Experiments)
Researcher Affiliation | Collaboration | Changyong Oh (QUVA Lab, IvI, University of Amsterdam, changyong.oh0224@gmail.com); Roberto Bondesan (Qualcomm AI Research, rbondesa@qti.qualcomm.com); Efstratios Gavves (QUVA Lab, IvI, University of Amsterdam, egavves@uva.nl); Max Welling (QUVA Lab, IvI, University of Amsterdam, m.welling@uva.nl)
Pseudocode | Yes | "Algorithm 1 Batch Acquisition by LAW" (A batch-selection sketch appears after the table.)
Open Source Code | Yes | "The code is available at https://github.com/ChangYong-Oh/LAW2ORDER"
Open Datasets | Yes | "We consider three types of combinatorial optimization on permutations, Quadratic Assignment Problems (QAP), Flowshop Scheduling Problems (FSP) and Traveling Salesman Problems (TSP) (see Supp. Subsec. F.3 for data source)." "On data generated from 4 real-world BNs [Scu10, SGG19], ..."
Dataset Splits | No | The paper describes initial data generation and model training procedures but does not explicitly provide train/validation/test dataset splits.
Hardware Specification | No | The authors answer 'No' to the checklist question 'Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?'
Software Dependencies | No | The paper mentions 'PyTorch [PGC+17]' and 'pymoo [BD20]' but does not provide specific version numbers for these software dependencies.
Experiment Setup | Yes | "GP surrogate models are trained with output-normalized evaluation data by optimizing the marginal likelihood until convergence with 10 different random initializations. We use the Adam optimizer [KB14] with default PyTorch [PGC+17] settings except for the learning rate of 0.1. For each benchmark, all methods share 5 randomly generated initial evaluation data sets of 20 points, and for each initial evaluation data set, each method is run three times, 15 runs in total. LAW2ORDER, q-EI and q-EST are run on each of these 5 sets using a batch size of 20." (A training-loop sketch appears after the table.)
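
Algorithm 1 itself is not reproduced in this summary. As a rough illustration of what "Batch Acquisition by LAW" refers to, below is a minimal sketch of greedy batch selection under an acquisition-weighted kernel, assuming the weighted kernel takes the form L(x, x') = a(x) k(x, x') a(x') with the acquisition value a used directly as the weight, and assuming selection from a fixed candidate set; the names law_batch_selection, kernel_matrix and acq_values are ours, not the authors'.

```python
import numpy as np

def law_batch_selection(kernel_matrix, acq_values, batch_size, jitter=1e-6):
    """Greedily pick a batch that maximizes the log-determinant of the
    acquisition-weighted kernel L = diag(a) @ K @ diag(a), trading off
    high acquisition value (quality) against kernel similarity (diversity).

    kernel_matrix : (n, n) PSD kernel matrix over candidate permutations
    acq_values    : (n,) nonnegative acquisition values of the candidates
    batch_size    : number of points to select
    """
    L = acq_values[:, None] * kernel_matrix * acq_values[None, :]
    n = L.shape[0]
    selected = []
    for _ in range(batch_size):
        best_idx, best_logdet = None, -np.inf
        for i in range(n):
            if i in selected:
                continue
            idx = selected + [i]
            sub = L[np.ix_(idx, idx)] + jitter * np.eye(len(idx))
            sign, logdet = np.linalg.slogdet(sub)
            if sign > 0 and logdet > best_logdet:
                best_idx, best_logdet = i, logdet
        selected.append(best_idx)
    return selected
```

In the paper the acquisition is maximized over the full space of permutations, where enumeration is infeasible, so the authors rely on search heuristics rather than a fixed candidate list; the loop above is only meant to make the quality-diversity trade-off induced by the weighted kernel concrete.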
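
The GP training recipe quoted in the Experiment Setup row (output normalization, marginal-likelihood maximization with Adam at learning rate 0.1, 10 random restarts) could be realized along the lines below. This is a minimal sketch using GPyTorch with a placeholder RBF kernel on a vector encoding of candidate points; the paper's surrogate uses a kernel defined on permutations, and SurrogateGP, fit_surrogate and the fixed step budget are our assumptions, not the authors' code.

```python
import torch
import gpytorch

class SurrogateGP(gpytorch.models.ExactGP):
    """Placeholder exact GP; the paper's kernel on permutations would replace the RBF kernel."""
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x))

def fit_surrogate(train_x, train_y, n_restarts=10, lr=0.1, n_steps=200):
    # Normalize outputs, as described in the quoted setup.
    train_y = (train_y - train_y.mean()) / train_y.std()
    best_state, best_loss = None, float("inf")
    for _ in range(n_restarts):  # 10 random initializations
        likelihood = gpytorch.likelihoods.GaussianLikelihood()
        model = SurrogateGP(train_x, train_y, likelihood)
        mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
        model.train(); likelihood.train()
        # Default Adam settings except for the learning rate of 0.1.
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(n_steps):  # fixed budget standing in for "until convergence"
            optimizer.zero_grad()
            loss = -mll(model(train_x), train_y)
            loss.backward()
            optimizer.step()
        if loss.item() < best_loss:
            best_loss, best_state = loss.item(), model.state_dict()
    return best_state, best_loss
```

The fixed n_steps budget stands in for the paper's "until convergence" criterion; a convergence check on the negative marginal log-likelihood could replace it.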