Randomized Gaussian Process Upper Confidence Bound with Tighter Bayesian Regret Bounds

Authors: Shion Takeno, Yu Inatsu, Masayuki Karasuyama

ICML 2023

Reproducibility Variable Result LLM Response
Research Type Experimental "We demonstrate the effectiveness of IRGP-UCB through extensive experiments. We demonstrate the experimental results on synthetic and benchmark functions and the materials dataset provided in (Liang et al., 2021)."
Researcher Affiliation Collaboration (1) Department of Computer Science, Nagoya Institute of Technology, Aichi, Japan; (2) RIKEN AIP, Tokyo, Japan.
Pseudocode Yes Algorithm 1 IRGP-UCB
Open Source Code No The paper does not provide an explicit statement or link for the open-sourcing of their code.
Open Datasets Yes We use synthetic functions generated from GP(0, k), where k is a Gaussian kernel with length-scale parameter ℓ = 0.1 and input dimension d = 3. We set the noise variance σ² = 10⁻⁴. The input domain consists of equally divided points in [0, 0.9], i.e., X = {0, 0.1, ..., 0.9}^d and |X| = 1000. We employ three benchmark functions, called the Holder table (d = 2), Cross-in-tray (d = 2), and Ackley (d = 4) functions, whose analytical forms are shown at https://www.sfu.ca/~ssurjano/optimization.html. This section provides the experimental results on the materials datasets provided in (Liang et al., 2021). In the perovskite dataset (Sun et al., 2021), we optimize environmental stability with respect to composition parameters for halide perovskite (d = 3 and |X| = 94). In the P3HT/CNT dataset (Bash et al., 2021), we optimize electrical conductivity with respect to composition parameters for a carbon nanotube polymer blend (d = 5 and |X| = 178). In the Ag NP dataset (Mekki-Berrada et al., 2021), we optimize the absorbance spectrum of synthesized silver nanoparticles with respect to processing parameters for synthesizing triangular nanoprisms (d = 5 and |X| = 164).
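The synthetic input domain described above (a regular grid of 10 points per dimension over [0, 0.9], d = 3, giving |X| = 10³ = 1000 candidates) can be sketched as follows; this is an illustrative reconstruction of the grid construction, not code from the paper:

```python
import itertools

# Equally divided points {0, 0.1, ..., 0.9} along each dimension
grid_1d = [round(0.1 * i, 1) for i in range(10)]
d = 3  # input dimension of the synthetic setting

# Cartesian product over d dimensions gives the finite candidate set X
X = list(itertools.product(grid_1d, repeat=d))
print(len(X))  # 10^d = 1000 candidate points
```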
Dataset Splits No The paper mentions initial training datasets, but does not explicitly specify train/validation/test splits using percentages or absolute counts for any of the datasets.
Hardware Specification No The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies No The paper mentions using random Fourier features and Monte Carlo estimation, but does not provide specific software names with version numbers.
Experiment Setup Yes For the posterior sampling in TS, MES, and JES, we used random Fourier features (Rahimi & Recht, 2008). For Monte Carlo estimation in MES and JES, we used ten samples. We set the noise variance σ² = 10⁻⁴. We used the Gaussian kernel with automatic relevance determination, whose hyperparameters were selected by marginal likelihood maximization every 5 iterations (Rasmussen & Williams, 2005). For GP-UCB, we set the confidence parameter as βt = 0.2d log(2t), which is the heuristic used in (Kandasamy et al., 2015; 2017). For RGP-UCB, we set ζt ∼ Gamma(κt, θ = 1) with κt = 0.2d log(2t), since E[ζt] must have the same order as βt (note that E[ζt] = θκt). For IRGP-UCB, we set s = d/2 and λ = 1/2. We set the initial dataset size |D0| = 2, as in (Liang et al., 2021). Thus, we optimized the hyperparameters of the RBF kernel in each iteration to avoid repeatedly obtaining samples using an inappropriate hyperparameter.
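The confidence-parameter settings above can be sketched as follows. This is a minimal illustration of the three schedules only (the deterministic GP-UCB heuristic βt, the Gamma-distributed ζt of RGP-UCB, and the shifted-exponential ζt of IRGP-UCB with s = d/2, λ = 1/2); the function and variable names are our own, not from the paper:

```python
import math
import numpy as np

rng = np.random.default_rng(0)
d = 3  # input dimension (synthetic setting reported above)

def beta_gp_ucb(t):
    # GP-UCB heuristic from Kandasamy et al.: beta_t = 0.2 * d * log(2t)
    return 0.2 * d * math.log(2 * t)

def zeta_rgp_ucb(t):
    # RGP-UCB: zeta_t ~ Gamma(kappa_t, theta = 1) with kappa_t = 0.2 d log(2t),
    # so E[zeta_t] = theta * kappa_t matches the GP-UCB beta_t in order.
    kappa_t = 0.2 * d * math.log(2 * t)
    return rng.gamma(shape=kappa_t, scale=1.0)

def zeta_irgp_ucb():
    # IRGP-UCB: zeta_t drawn from a two-parameter (shifted) exponential
    # distribution with shift s = d/2 and rate lambda = 1/2; note it does
    # not depend on t.
    s, lam = d / 2, 0.5
    return s + rng.exponential(scale=1.0 / lam)
```

In each case the sampled (or deterministic) parameter scales the posterior standard deviation in the UCB acquisition; the key practical difference is that the IRGP-UCB parameter needs no increasing schedule in t.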