SnAKe: Bayesian Optimization with Pathwise Exploration

Authors: Jose Pablo Folch, Shiqiang Zhang, Robert Lee, Behrang Shafei, David Walz, Calvin Tsay, Mark van der Wilk, Ruth Misener

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | For all experimental results we report the mean and the standard deviation over 25 experimental runs. We give the full implementation details and results in Appendices E and F, respectively. Classical BO methods are implemented using BoTorch [3] and GPyTorch [11]. The code to replicate all results is available online at https://github.com/cog-imperial/SnAKe. (A minimal illustrative BoTorch loop is sketched after this table.)
Researcher Affiliation | Collaboration | Jose Pablo Folch (Imperial College London); Shiqiang Zhang (Imperial College London); Robert M. Lee (BASF SE, Ludwigshafen, Germany); Behrang Shafei (BASF SE, Ludwigshafen, Germany); Calvin Tsay (Imperial College London); Mark van der Wilk (Imperial College London); Ruth Misener (Imperial College London)
Pseudocode | Yes | Algorithm 1, ε-Point Deletion (page 5), and Algorithm 2, SnAKe (page 6), are explicitly labeled and structured algorithm blocks. (An illustrative deletion sketch follows the table.)
Open Source Code | Yes | The code to replicate all results is available online at https://github.com/cog-imperial/SnAKe.
Open Datasets | No | The paper uses synthetic functions (e.g., Branin2D, Hartmann3D) and refers to the 'SnAr chemistry benchmark [18]' and the 'Shekel benchmark function (as in [41])'. While these are known functions/benchmarks, the paper does not provide concrete access information (specific link, DOI, or repository) for the *data instances* used, nor does it explicitly state that the generated data is publicly available.
Dataset Splits | No | The paper does not explicitly provide specific dataset split information (exact percentages, sample counts, or detailed splitting methodology) for train/validation/test sets.
Hardware Specification | Yes | We were able to comfortably run all experiments on a CPU (2.5 GHz Quad-Core Intel Core i7), where SnAKe shared a wall-time similar to Local Penalization methods.
Software Dependencies | No | The paper mentions software like BoTorch [3], GPyTorch [11], PyTorch [36], and the Summit package [10]. However, it does not provide specific version numbers for these software dependencies within the main text or appendices.
Experiment Setup | Yes | For all experimental results we report the mean and the standard deviation over 25 experimental runs. We give the full implementation details and results in Appendices E and F, respectively. [...] We set a delay of t_delay = 25, and optimize for T = 100 iterations. [...] For every experiment, T = 250, and we limit the x-axis to the maximum cost achieved by SnAKe or Random. [...] In all experiments, we examine SnAKe for ε = 0, 0.1, and 1. We further introduce a parameter-free alternative by adaptively selecting ε to be the smallest length scale from the GP's kernel, and denote it ε-SnAKe.
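
As context for the Research Type and Software Dependencies rows above, here is a minimal sketch of the kind of classical BO loop that BoTorch and GPyTorch support. The objective function, iteration budget, and acquisition settings below are illustrative assumptions, not the paper's actual configuration (which lives in the linked repository).

```python
import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll
from botorch.acquisition import UpperConfidenceBound
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

# Illustrative stand-in objective (NOT one of the paper's benchmarks):
# a smooth function on [0, 1]^2 to be maximized.
def objective(x: torch.Tensor) -> torch.Tensor:
    return -((4.0 * x - 2.0) ** 2).sum(dim=-1, keepdim=True)

bounds = torch.stack([torch.zeros(2, dtype=torch.double),
                      torch.ones(2, dtype=torch.double)])
train_X = torch.rand(5, 2, dtype=torch.double)  # small random initial design
train_Y = objective(train_X)

for _ in range(20):  # illustrative budget; the paper reports T = 100 and T = 250
    # Refit the GP surrogate on all data collected so far.
    model = SingleTaskGP(train_X, train_Y)
    fit_gpytorch_mll(ExactMarginalLogLikelihood(model.likelihood, model))

    # Maximize a standard acquisition function (UCB here as a placeholder).
    candidate, _ = optimize_acqf(
        UpperConfidenceBound(model, beta=2.0),
        bounds=bounds, q=1, num_restarts=5, raw_samples=64,
    )

    # Evaluate the objective and append the new observation.
    train_X = torch.cat([train_X, candidate])
    train_Y = torch.cat([train_Y, objective(candidate)])
```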
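For the Pseudocode row: the paper's Algorithm 1 (ε-Point Deletion) and Algorithm 2 (SnAKe) are given as structured blocks in the paper itself. As a rough illustration only, a distance-based deletion step of the kind the name suggests might look like the following; this is an assumption about the general pattern, not a transcription of the paper's Algorithm 1.

```python
import torch

def epsilon_point_deletion(points: torch.Tensor, eps: float) -> torch.Tensor:
    # Greedily keep points that are at least `eps` apart (Euclidean distance).
    # Illustrative sketch only; consult the paper's Algorithm 1 for the rule
    # SnAKe actually uses.
    kept: list[torch.Tensor] = []
    for p in points:
        if all(torch.norm(p - q) >= eps for q in kept):
            kept.append(p)
    return torch.stack(kept) if kept else points[:0]

# With eps = 0 every candidate survives; a larger eps thins out
# near-duplicate candidate queries.
batch = torch.rand(50, 2)
print(epsilon_point_deletion(batch, eps=0.1).shape)
```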