Bayesian Optimization of Function Networks
Authors: Raul Astudillo, Peter Frazier
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate through numerical experiments that access to additional information available in a problem formulated as a function network can dramatically accelerate optimization. We study four synthetic problems and four real-world problems: a manufacturing problem similar in spirit to the vaccine example above, an active learning problem with a robotic arm, and two problems arising in epidemiology, one calibrating an epidemic model and the other designing a testing strategy to control the spread of COVID-19. Our method significantly outperforms competing methods that utilize less information, in some cases by 5% and in other cases by several orders of magnitude. Figure 2: Top: Results on synthetic problems that adapt widely used synthetic test functions into function networks. Bottom: Results on realistic problems: manufacturing line, design of testing protocols for COVID-19, fetch-and-reach with a robotic arm, and calibration of an epidemic model. EI-FN substantially improves over benchmark methods, with larger improvements for problems with higher-dimensional decision vectors and more nodes. |
| Researcher Affiliation | Academia | Raul Astudillo Cornell University ra598@cornell.edu Peter I. Frazier Cornell University pf98@cornell.edu |
| Pseudocode | Yes | Algorithm 1 Draw one sample from the posterior on g(x) |
| Open Source Code | Yes | Code to reproduce our numerical experiments can be found at https://github.com/RaulAstudillo06/BOFN. |
| Open Datasets | No | The paper refers to environments and models such as the 'Fetch environment from OpenAI Gym', the 'SIS model', and 'standard test functions', but does not provide specific links, DOIs, repositories, or formal citations (with author/year for the dataset itself) for publicly available datasets. |
| Dataset Splits | No | The paper describes a first stage of evaluations at randomly chosen points followed by a second stage run by each algorithm, but does not specify dataset split percentages, sample counts, or predefined train/validation/test splits. |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper states 'All algorithms were implemented in BoTorch (Balandat et al., 2020)' but does not provide a specific version number for BoTorch or any other software dependency. |
| Experiment Setup | Yes | In all problems, a first stage of evaluations is performed using 2(d + 1) points chosen uniformly at random over X. A second stage (pictured in plots) is then performed using each of the algorithms. Each experiment was replicated 30 times. |
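The experiment setup row above fully determines the size of the first-stage design: 2(d + 1) points drawn uniformly at random over the box X, repeated for 30 independent replications. A minimal sketch of that initial design, assuming X is an axis-aligned box given as per-dimension (low, high) bounds (the function name and interface here are ours, not from the paper's code):

```python
import random

def initial_design(d, bounds, seed=None):
    """First-stage design from the paper's setup: 2*(d + 1) points
    sampled uniformly at random over the box X.

    bounds: list of d (low, high) pairs defining X (an assumption on
    how X is represented; the paper does not specify an interface).
    """
    rng = random.Random(seed)
    n = 2 * (d + 1)
    return [[rng.uniform(lo, hi) for (lo, hi) in bounds] for _ in range(n)]

# Example: a 3-dimensional problem on [0, 1]^3 yields 2*(3 + 1) = 8 points,
# and each of the 30 replications would redraw this design with a fresh seed.
design = initial_design(3, [(0.0, 1.0)] * 3, seed=0)
```

Each replication would then hand this shared first-stage design to every competing algorithm before the second-stage (plotted) evaluations begin.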