Private Outsourced Bayesian Optimization
Authors: Dmitrii Kharkovskii, Zhongxiang Dai, Bryan Kian Hsiang Low
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We empirically evaluate the performance of our PO-GP-UCB algorithm with synthetic and real-world datasets (Section 4). In this section, we empirically evaluate the performance of our PO-GP-UCB algorithm using four datasets including a synthetic GP dataset, a real-world loan applications dataset, a real-world property price dataset and, in Appendix A, the Branin-Hoo benchmark function. |
| Researcher Affiliation | Academia | Dmitrii Kharkovskii¹, Zhongxiang Dai¹, Bryan Kian Hsiang Low¹ [...] ¹Department of Computer Science, National University of Singapore, Republic of Singapore. |
| Pseudocode | Yes | Algorithm 1 PO-GP-UCB (The curator part) and Algorithm 2 PO-GP-UCB (The modeler part) |
| Open Source Code | No | The paper does not provide any explicit statements about releasing its source code or links to a code repository for the methodology described. |
| Open Datasets | Yes | We use the public data from https://www.lendingclub.com/ and We use the public data from https://www.ura.gov.sg/realEstateIIWeb/transaction/search.action. |
| Dataset Splits | No | The paper describes the total size of the datasets and how they are used in the Bayesian Optimization process, but it does not specify explicit training, validation, or test dataset splits in terms of percentages or sample counts. |
| Hardware Specification | No | The paper does not specify any hardware details such as GPU/CPU models, memory, or specific computing environments used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., programming language versions, library versions, or solver versions) needed to replicate the experiment. |
| Experiment Setup | Yes | We set the GP-UCB parameter δ_UCB = 0.05 (Theorem 3) and normalize the inputs to have a maximal norm of 25 in all experiments. The GP hyperparameters are learned using maximum likelihood estimation (Rasmussen & Williams, 2006). The function to maximize is sampled from a GP with the GP hyperparameters μ_x = 0, l = 1.25, σ²_y = 1 and σ²_n = 10⁻⁵. We set the parameter r = 10 (Algorithm 1), DP parameter δ = 10⁻⁵ (Definition 2) and the GP-UCB parameter T = 50 for this experiment. |
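As a reading aid for the Experiment Setup row above, the sketch below instantiates a plain GP-UCB loop with the quoted settings (δ_UCB = 0.05, l = 1.25, σ²_y = 1, σ²_n = 10⁻⁵, T = 50, inputs normalized to a maximal norm of 25). It is not the paper's PO-GP-UCB: the curator/modeler split and the privacy mechanism of Algorithms 1 and 2 are omitted, and the candidate grid, stand-in objective, and β_t schedule are assumptions rather than reported settings.

```python
# Minimal GP-UCB sketch, assuming an SE kernel and a finite candidate grid.
# NOT the paper's PO-GP-UCB (no outsourcing or differential privacy).
import numpy as np

rng = np.random.default_rng(0)

# Settings quoted in the Experiment Setup row.
LENGTHSCALE, SIGNAL_VAR, NOISE_VAR = 1.25, 1.0, 1e-5
DELTA_UCB, T, MAX_NORM = 0.05, 50, 25.0

def se_kernel(A, B):
    """Squared-exponential kernel: sigma_y^2 * exp(-||x - x'||^2 / (2 l^2))."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return SIGNAL_VAR * np.exp(-0.5 * d2 / LENGTHSCALE**2)

def gp_posterior(X, y, Xs):
    """Posterior mean and variance of a zero-mean GP at candidate points Xs."""
    K = se_kernel(X, X) + NOISE_VAR * np.eye(len(X))
    Ks = se_kernel(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    v = np.linalg.solve(L, Ks)
    mean = Ks.T @ alpha
    var = SIGNAL_VAR - np.sum(v**2, axis=0)
    return mean, np.maximum(var, 1e-12)

# Candidate set: a 1-D grid rescaled to a maximal norm of 25 (assumed shape).
Xs = np.linspace(-1.0, 1.0, 200)[:, None] * MAX_NORM
f = lambda x: np.sin(x[:, 0] / 5.0)            # stand-in objective (assumed)

X = Xs[rng.integers(len(Xs), size=1)]          # one random initial query
y = f(X) + np.sqrt(NOISE_VAR) * rng.standard_normal(len(X))

for t in range(1, T + 1):
    # A standard beta_t schedule for finite candidate sets (Srinivas et al.,
    # 2010), driven by delta_ucb; the paper's exact choice may differ.
    beta_t = 2 * np.log(len(Xs) * (t * np.pi)**2 / (6 * DELTA_UCB))
    mean, var = gp_posterior(X, y, Xs)
    ucb = mean + np.sqrt(beta_t * var)
    x_next = Xs[np.argmax(ucb)][None, :]        # maximize the UCB acquisition
    y_next = f(x_next) + np.sqrt(NOISE_VAR) * rng.standard_normal(1)
    X, y = np.vstack([X, x_next]), np.concatenate([y, y_next])

print("best observed value:", y.max())
```

In the paper's setting the curator would additionally transform the data before handing it to the modeler, and the hyperparameters above would be learned by maximum likelihood rather than fixed; the fixed values here simply mirror the quoted excerpt.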