Distributionally Robust Bayesian Optimization with $\varphi$-divergences
Authors: Hisham Husain, Vu Nguyen, Anton van den Hengel
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We then show experimentally that our method surpasses existing methods, attesting to the theoretical results. ... Experimental setting. The experiments are repeated using 30 independent runs. |
| Researcher Affiliation | Industry | Hisham Husain Amazon hushisha@amazon.com Vu Nguyen Amazon vutngn@amazon.com Anton van den Hengel Amazon hengelah@amazon.com |
| Pseudocode | Yes | Algorithm 1 DRBO with ϕ-divergence |
| Open Source Code | No | We will release the Python implementation code in the final version. |
| Open Datasets | Yes | We consider the popular benchmark functions with different dimensions d. ... We perform an experiment on the Wind Power dataset [8] and vary the context dimensions |C| ∈ {30, 100, 500} in Fig. 4. |
| Dataset Splits | No | No specific dataset split information (exact percentages, sample counts, citations to predefined splits, or detailed splitting methodology) needed to reproduce the data partitioning was found. |
| Hardware Specification | No | No specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) were found for running its experiments. |
| Software Dependencies | No | The paper mentions 'Python implementation code' but does not provide specific software dependencies or version numbers (e.g., library names with version numbers). |
| Experiment Setup | Yes | Experimental setting. The experiments are repeated using 30 independent runs. We set |C| = 30, which should be sufficient to draw c i.i.d. ∼ q in one-dimensional space to compute Eqs. (4, 5). We optimize the GP hyperparameters (e.g., learning rate) by maximizing the GP log marginal likelihood [43]. |
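The experiment-setup row states that GP hyperparameters are tuned by maximizing the GP log marginal likelihood. The sketch below illustrates that step only; it is not the authors' code. The data, RBF kernel, noise level, and grid search over the lengthscale are all illustrative assumptions (the 30 samples merely mirror the |C| = 30 setting quoted above).

```python
import numpy as np

# Hypothetical one-dimensional data standing in for the paper's setup:
# 30 inputs, echoing the |C| = 30 setting in the experiment description.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, 30)
y = np.sin(6.0 * X) + 0.1 * rng.standard_normal(30)

def log_marginal_likelihood(X, y, lengthscale, noise=0.1):
    """GP log marginal likelihood under an RBF kernel with fixed noise."""
    n = len(X)
    d = X[:, None] - X[None, :]
    K = np.exp(-0.5 * (d / lengthscale) ** 2) + noise**2 * np.eye(n)
    # Cholesky factorization gives a stable solve and the log-determinant.
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return -0.5 * y @ alpha - np.log(np.diag(L)).sum() - 0.5 * n * np.log(2 * np.pi)

# Maximize over a lengthscale grid (a simple stand-in for gradient-based tuning).
grid = np.linspace(0.05, 2.0, 100)
best = max(grid, key=lambda ls: log_marginal_likelihood(X, y, ls))
print(f"selected lengthscale: {best:.3f}")
```

In practice, libraries such as GPy or GPyTorch perform this maximization with gradient-based optimizers over all kernel hyperparameters jointly; the grid search here just makes the objective explicit.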