Top-k Ranking Bayesian Optimization
Authors: Quoc Phong Nguyen, Sebastian Tay, Bryan Kian Hsiang Low, Patrick Jaillet
AAAI 2021, pp. 9135-9143
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We empirically evaluate the performance of MPES using several synthetic benchmark functions, CIFAR-10 dataset, and SUSHI preference dataset. |
| Researcher Affiliation | Academia | 1Dept. of Computer Science, National University of Singapore, Republic of Singapore 2Dept. of Electrical Engineering and Computer Science, MIT, USA |
| Pseudocode | No | The paper describes procedures and methods but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code is available at https://github.com/sebtsh/Top-k-Ranking-Bayesian-Optimization. |
| Open Datasets | Yes | We empirically evaluate the performance of MPES using several synthetic benchmark functions, CIFAR-10 dataset, and SUSHI preference dataset. |
| Dataset Splits | No | The paper mentions 'initial observations' for the BO algorithms but does not specify train/validation/test splits (as percentages or counts) for reproducibility. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as CPU/GPU models, memory, or cloud instance types used for running the experiments. |
| Software Dependencies | No | The paper does not list software dependencies with version numbers (e.g., Python, PyTorch, or other library versions). |
| Experiment Setup | Yes | To evaluate MPES, we set |X| to 20 and the number of samples to n = 1000. The numbers of initial observations provided to the BO algorithms are 5, 6, and 12 for experiments with the Forrester, SHC, and Hartmann functions, respectively. Six initial observations are provided to the BO algorithms [for CIFAR-10]. In these experiments, there are 10 initial observations provided to the BO algorithms [for SUSHI]. |