Near-linear time Gaussian process optimization with adaptive batching and resparsification
Authors: Daniele Calandriello, Luigi Carratino, Alessandro Lazaric, Michal Valko, Lorenzo Rosasco
ICML 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | These findings are then confirmed in several experiments, where BBKB is much faster than state-of-the-art methods. |
| Researcher Affiliation | Collaboration | 1 Istituto Italiano di Tecnologia, Genova, Italy (now at DeepMind, Paris, France) 2 MaLGa, DIBRIS, Università degli Studi di Genova, Italy 3 Facebook AI Research, Paris, France 4 DeepMind, Paris, France 5 MIT, Cambridge, MA, USA 6 Istituto Italiano di Tecnologia, Genova, Italy. |
| Pseudocode | Yes | Algorithm 1 BBKB (a simplified, hedged sketch of the batch loop follows the table). |
| Open Source Code | Yes | Code can be found at github.com/luigicarratino/batch-bkb |
| Open Datasets | Yes | We first perform experiments on two regression datasets, Abalone (A = 4177, d = 8) and Cadata (A = 20640, d = 8). We then perform experiments on the NAS-bench-101 dataset (Ying et al., 2019). (An illustrative loading snippet for the two regression datasets follows the table.) |
| Dataset Splits | No | The paper refers to using existing data for initialization (e.g., 'Tinit = 2000 evaluated network architectures'), but does not provide explicit train/validation/test splits for its experiments. |
| Hardware Specification | No | The paper only states that experiments were 'run on a 16 core dual-CPU server', without further hardware details. |
| Software Dependencies | No | The experiments are implemented in Python using the numpy, scikit-learn and botorch libraries; no library versions are specified. |
| Experiment Setup | Yes | All algorithms use the hyper-parameters suggested by theory. When not applicable, cross-validated parameters that perform best for each individual algorithm are used (e.g. the kernel bandwidth). All the detailed choices and further experiments are reported in Appendix D. |
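
The two regression datasets listed in the Open Datasets row are publicly available. Below is a minimal loading sketch, assuming scikit-learn's bundled California Housing data can stand in for Cadata and that Abalone is fetched from OpenML; the preprocessing choices (one-hot encoding the categorical `Sex` column) are illustrative assumptions, not taken from the authors' code at github.com/luigicarratino/batch-bkb.

```python
# Illustrative loading of the two regression candidate sets (not the authors' script).
import pandas as pd
from sklearn.datasets import fetch_california_housing, fetch_openml

# Cadata corresponds to scikit-learn's California Housing data (20640 points, 8 features).
X_cadata, y_cadata = fetch_california_housing(return_X_y=True)

# Abalone (4177 points) is pulled from OpenML; one-hot encoding the categorical "Sex"
# attribute is an assumption about preprocessing, not the paper's stated choice.
abalone = fetch_openml("abalone", version=1, as_frame=True)
X_abalone = pd.get_dummies(abalone.data).to_numpy(dtype=float)
y_abalone = abalone.target.astype(float).to_numpy()
```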
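For the Pseudocode and Experiment Setup rows, the following is a heavily simplified, hedged sketch of the kind of batched GP-UCB loop that Algorithm 1 (BBKB) builds on: candidates are scored with an upper confidence bound, the posterior mean is frozen within a batch while the variances are refreshed with hallucinated feedback, and the batch ends when a variance-ratio test fires. It uses an exact GP instead of the paper's sparsified posterior (so it does not achieve the near-linear runtime and omits resparsification), and all names and constants (`beta`, `ratio_thresh`, `noise`) are illustrative assumptions rather than the authors' choices.

```python
# Simplified batched GP-UCB loop with an adaptive, variance-ratio batch stopping rule.
# This is NOT the paper's BBKB implementation: it uses an exact GP and skips the
# Nystroem sparsification / resparsification steps that give BBKB its speed.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF


def batch_gp_ucb(candidates, oracle, budget, beta=2.0, ratio_thresh=10.0, noise=1e-1):
    rng = np.random.default_rng(0)
    idx = [int(rng.integers(len(candidates)))]   # indices of evaluated candidates
    y = [oracle(candidates[idx[0]])]             # their observed values

    while len(y) < budget:
        # Full refit at the start of a batch (the paper would also resparsify here).
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=noise, optimizer=None)
        gp.fit(candidates[idx], np.asarray(y))
        mu, sigma0 = gp.predict(candidates, return_std=True)
        sigma = sigma0.copy()

        batch = []
        while len(y) + len(batch) < budget:
            # UCB scores with the mean frozen at its batch-start value.
            i = int(np.argmax(mu + beta * sigma))
            batch.append(i)
            # Variance-only update: the GP posterior variance does not depend on the
            # observed values, so we hallucinate feedback (the posterior mean) for the
            # points already in the batch and recompute the stds.
            gp_b = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=noise, optimizer=None)
            gp_b.fit(candidates[idx + batch], np.concatenate([y, mu[batch]]))
            _, sigma = gp_b.predict(candidates, return_std=True)
            # Adaptive batch size: stop once some candidate's std has shrunk by more
            # than a constant factor since the batch started.
            if np.max(sigma0 / np.maximum(sigma, 1e-12)) > ratio_thresh:
                break

        # Query the oracle for the whole batch at once, then start a new batch.
        idx.extend(batch)
        y.extend(oracle(candidates[i]) for i in batch)

    best = int(np.argmax(y))
    return candidates[idx[best]], y[best]
```

For example, `batch_gp_ucb(X_cadata[:500], lambda x: -np.linalg.norm(x - X_cadata[0]), budget=30)` runs the loop on a small slice of the Cadata candidates with a toy objective; in the paper's setting the oracle is the noisy function being optimized and the hyper-parameters are set as suggested by theory or cross-validated per algorithm.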