Sequential and Parallel Constrained Max-value Entropy Search via Information Lower Bound
Authors: Shion Takeno, Tomoyuki Tamura, Kazuki Shitara, Masayuki Karasuyama
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the effectiveness of CMES-IBO by several benchmark functions and real-world problems. ... (Section 6, Experiments) We demonstrate the performance of sequential optimization by comparing with CMES, EIC (Gelbart et al., 2014), a TS-based method referred to as TSC, and PESC (Hernández-Lobato et al., 2015) in Spearmint... |
| Researcher Affiliation | Academia | 1Department of Computer Science, Nagoya Institute of Technology, Aichi, Japan 2Center for Advanced Intelligence Project, RIKEN, Tokyo, Japan 3Department of Physical Science and Engineering, Nagoya Institute of Technology, Aichi, Japan 4Joining and Welding Research Institute, Osaka University, Osaka, Japan 5Nanostructures Research Laboratory, Japan Fine Ceramics Center, Aichi, Japan. |
| Pseudocode | Yes | Algorithm 1: Sequential and parallel CMES-IBO. |
| Open Source Code | No | Information is insufficient. The paper mentions using third-party open-source libraries such as Spearmint and GPy, but does not provide access to the authors' own implementation of CMES-IBO or CMES. |
| Open Datasets | Yes | We fitted the two-layer CNN to the CIFAR10 dataset (Krizhevsky & Hinton, 2009) |
| Dataset Splits | No | Information is insufficient. The paper mentions the CIFAR10 dataset but does not provide specific training, validation, and test split percentages, sample counts, or references to predefined splits used for reproducibility. |
| Hardware Specification | No | Information is insufficient. The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory specifications) used to run the experiments. |
| Software Dependencies | No | Information is insufficient. The paper mentions software such as Spearmint, GPy, PyTorch, and NLopt, but does not provide version numbers for these dependencies, which would be needed for reproducibility. |
| Experiment Setup | Yes | The sample size of all MC approximations is set as 10. For the kernel function in GPs, we used a linear combination of the linear kernel $k_{\mathrm{LIN}}: \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ and RBF kernel $k_{\mathrm{RBF}}: \mathcal{X} \times \mathcal{X} \to \mathbb{R}$, defined as $\sigma^2_{\mathrm{LIN}} k_{\mathrm{LIN}}(x, x') + \sigma^2_{\mathrm{RBF}} k_{\mathrm{RBF}}(x, x')$, where $\sigma^2_{\mathrm{LIN}}$ and $\sigma^2_{\mathrm{RBF}}$ are updated by marginal likelihood maximization every 5 iterations... The numbers of initial points are set as 3 for the GP-derived synthetic function, 5 for the two-dimensional benchmark functions, $5d$ for the Hartmann6 function, and 25 for others. (A hedged code sketch of this kernel setup follows the table.) |
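
The kernel construction and refitting schedule quoted in the Experiment Setup row are concrete enough to sketch in code. Below is a minimal, hedged illustration using GPy (one of the libraries the paper mentions); the input dimension `d`, the toy data, and the surrounding loop are assumptions made for illustration only, not the authors' implementation of CMES-IBO.

```python
# Minimal sketch (not the authors' code) of the GP prior described above:
# sigma^2_LIN * k_LIN(x, x') + sigma^2_RBF * k_RBF(x, x'), with the variances
# refit by marginal likelihood maximization every 5 iterations.
import numpy as np
import GPy

d = 2                                  # input dimension (illustrative assumption)
rng = np.random.default_rng(0)

def make_kernel(d):
    """Sum kernel sigma^2_LIN k_LIN + sigma^2_RBF k_RBF."""
    k_lin = GPy.kern.Linear(input_dim=d)  # its `variances` parameter acts as sigma^2_LIN
    k_rbf = GPy.kern.RBF(input_dim=d)     # its `variance` parameter acts as sigma^2_RBF
    return k_lin + k_rbf

# Toy data standing in for the initial design points.
X = rng.uniform(size=(5, d))
Y = np.sin(X.sum(axis=1, keepdims=True))

model = GPy.models.GPRegression(X, Y, make_kernel(d))

for t in range(1, 21):
    # ... select the next point via the acquisition function, evaluate it,
    # and append it to (X, Y); elided here ...
    model.set_XY(X, Y)
    if t % 5 == 0:                     # hyperparameter update every 5 iterations
        model.optimize()               # maximizes the log marginal likelihood
```

In GPy, `model.optimize()` performs exactly the type-II maximum likelihood update described in the quote, fitting the two kernel variances (and any other free hyperparameters) against the accumulated observations.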