Relaxing the Additivity Constraints in Decentralized No-Regret High-Dimensional Bayesian Optimization

Authors: Anthony Bardou, Patrick Thiran, Thomas Begin

ICLR 2024

Reproducibility assessment (variable, result, and supporting LLM response):
Research Type: Experimental
"In this section, we detail the experiments carried out to evaluate the empirical performance of DuMBO. An open-source implementation of DuMBO, based on BoTorch (Balandat et al., 2020), is available on GitHub. [...] Table 2 gathers the averaged results that were obtained. Additionally, we made wall-clock time measurements on some experiments and we discuss them in Appendix J."
Researcher Affiliation: Academia
"Anthony Bardou & Patrick Thiran, IC, EPFL, Lausanne, Switzerland ({anthony.bardou,patrick.thiran}@epfl.ch); Thomas Begin, ENS Lyon, UCBL, CNRS, LIP, Lyon, France (thomas.begin@ens-lyon.fr)"
Pseudocode: Yes
"The detailed algorithm (Algorithm 1), as well as a discussion about its time complexity, are provided in Appendix E."
Open Source Code: Yes
"An open-source implementation of DuMBO, based on BoTorch (Balandat et al., 2020), is available on GitHub: https://github.com/abardou/dumbo"
Open Datasets: Yes
"Our benchmark comprises four synthetic functions and three real-world experiments. [...] The 2d Six-Hump Camel (SHC), the 6d Hartmann, the 24d Powell and the 100d Rastrigin. [...] We chose to compute the likelihood of the galaxy clustering in Chuang et al. (2013) from the Data Release 9 (DR9) CMASS sample of the SDSS-III Baryon Oscillation Spectroscopic Survey (BOSS)."
Dataset Splits: No
"To condition the i-th GP, we consider the data set S_i = {(x_j^{V_i}, Y_{j,i})}"
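The quoted construction restricts each observation to the input dimensions V_i that the i-th factor depends on, paired with that factor's observed value. A minimal NumPy sketch of this idea follows; the function and argument names (`conditioning_sets`, `subsets`) are illustrative and not taken from the paper's code:

```python
import numpy as np

def conditioning_sets(X, Y, subsets):
    """Build per-GP data sets S_i = {(x_j^{V_i}, Y_{j,i})}.

    X: (n, d) array of queried points.
    Y: (n, m) array of observed factor values, one column per GP.
    subsets: list of index lists V_i, the dimensions each factor depends on.
    Shapes and names are illustrative, not from the paper's implementation.
    """
    return [(X[:, Vi], Y[:, i]) for i, Vi in enumerate(subsets)]

# Example: 4 observations in 3 dimensions, decomposed into 2 factors
X = np.arange(12.0).reshape(4, 3)
Y = np.arange(8.0).reshape(4, 2)
S = conditioning_sets(X, Y, [[0, 2], [1]])
# S[0] conditions the first GP on dimensions {0, 2} only
```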
Hardware Specification: Yes
"The measurements were taken using a server equipped with two Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz, with 14 cores (28 threads) each."
Software Dependencies: No
"An open-source implementation of DuMBO, based on BoTorch (Balandat et al., 2020), is available on GitHub."
Experiment Setup: Yes
"Finally, note that we chose a Matérn kernel (with its hyperparameter ν = 5/2) for each GP involved in these experiments. [...] We propose to proceed by gradient ascent (e.g. with ADAM (Kingma & Ba, 2015))"
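The Matérn-5/2 covariance mentioned in the quote has a simple closed form, k(r) = σ²(1 + √5·r/ℓ + 5r²/(3ℓ²))·exp(−√5·r/ℓ). A minimal NumPy sketch is below; the parameter names are illustrative, and the paper itself relies on BoTorch's kernel implementation rather than hand-rolled code:

```python
import numpy as np

def matern52(x1, x2, lengthscale=1.0, variance=1.0):
    """Matérn covariance with nu = 5/2 between two points.

    k(r) = variance * (1 + s + s^2 / 3) * exp(-s),  where s = sqrt(5) * r / lengthscale.
    Note s^2 / 3 = 5 r^2 / (3 * lengthscale^2), matching the standard closed form.
    """
    r = np.linalg.norm(np.asarray(x1, float) - np.asarray(x2, float))
    s = np.sqrt(5.0) * r / lengthscale
    return variance * (1.0 + s + s**2 / 3.0) * np.exp(-s)

# k(x, x) equals the variance, and the covariance decays with distance
same = matern52([0.0, 0.0], [0.0, 0.0])
near = matern52([0.0, 0.0], [1.0, 0.0])
far = matern52([0.0, 0.0], [2.0, 0.0])
```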